This learning resource introduces the basic concepts and techniques related to verification, as well as how to use the open-source tools provided by this project for chip verification.
Before studying this material, it is assumed that you already have basic knowledge of Linux, Python, etc.
If you plan to participate in the “Open Source Verification Projects” published on this platform, it is recommended to complete the study of this material in advance.
1 - Quick Start
How to use the open verification platform to participate in hardware verification.
This page will briefly introduce what verification is, as well as concepts used in the examples, such as DUT (Design Under Test) and RM (Reference Model).
Chip Verification
Chip verification is a crucial step in ensuring the correctness and reliability of chip designs. It encompasses functional verification, formal verification, and physical verification; this material covers only functional verification, with a focus on simulation-based chip functional verification. The processes and methods of chip functional verification have much in common with software testing, such as unit testing, system testing, black-box testing, and white-box testing. They also share similar metrics, such as functional coverage and code coverage. In essence, apart from the tools and programming languages used, their goals and processes are almost identical, so software test engineers should, in principle, be able to perform chip verification. In practice, however, software testing and chip verification are two largely separate fields, primarily because they use different verification tools and languages, which makes it difficult for software test engineers to cross over. Chip verification commonly uses hardware description languages (e.g., Verilog or SystemVerilog) and specialized commercial tools for circuit simulation. Hardware description languages differ from high-level software programming languages like C++ or Python in their unique "clock" semantics, which presents a steep learning curve for software engineers.
To bridge the gap between chip verification and traditional software testing, allowing more people to participate in chip verification, this project provides the following content:
Multi-language verification tools (Picker), allowing users to use their preferred programming language for chip verification.
Verification framework (MLVP), enabling functional verification without worrying about the clock.
Introduction to basic circuits and verification knowledge, helping software enthusiasts understand circuit characteristics more easily.
Basic learning materials for fundamental verification knowledge.
Real high-performance chip verification cases, allowing enthusiasts to participate in verification work remotely.
Basic Terms
DUT: Design Under Test, usually referring to the designed RTL code.
RM: Reference Model, a standard error-free model corresponding to the unit under test.
RTL: Register Transfer Level, typically referring to the Verilog or VHDL code corresponding to the chip design.
Coverage: The percentage of the test range relative to the entire requirement range. In chip verification, this typically includes line coverage, function coverage, and functional coverage.
DV: Design Verification, referring to the collaboration of design and verification.
Differential Testing (difftest): Running two (or more) functionally identical units under test with the same test cases that satisfy their requirements, and observing whether their execution results differ.
Tool Introduction
The core tool used in this material is Picker (https://github.com/XS-MLVP/picker). Its purpose is to automatically provide high-level programming language interfaces (Python/C++) for RTL-written design modules. Based on this tool, verification personnel with a software development (testing) background can perform chip verification without learning hardware description languages like Verilog/VHDL.
System Requirements
Recommended operating system: Ubuntu 22.04 LTS
In the development and research of system architecture, Linux is the most commonly used platform, mainly because Linux has a rich set of software and tool resources. Due to its open-source nature, important tools and software (such as Verilator) can be easily developed for Linux. In this course, multi-language verification tools like Picker and Swig can run stably on Linux.
1.1 - Setting Up the Verification Environment
Install the necessary dependencies, download, build, and install the required tools.
Please ensure that tools such as verible-verilog-format have been added to the $PATH environment variable, so they can be called directly from the command line.
Source Code Download
git clone https://github.com/XS-MLVP/picker.git --depth=1
cd picker
make init
Build and Install
cd picker
make
# You can enable support for other languages by
# using `make BUILD_XSPCOMM_SWIG=python,java,scala,golang`.
# Each language requires its own development environment,
# which needs to be configured separately, such as `javac` for Java.
sudo -E make install
The default installation path is /usr/local, with binary files placed in /usr/local/bin and template files in /usr/local/share/picker.
If you need to change the installation directory, you can pass arguments to cmake by specifying ARGS, for example: make ARGS="-DCMAKE_INSTALL_PREFIX=your_install_dir"
The installation will automatically install the xspcomm base library (https://github.com/XS-MLVP/xcomm), which is used to encapsulate the basic types of RTL modules, located at /usr/local/lib/libxspcomm.so. You may need to manually set the link directory parameters (-L) during compilation.
If support for languages such as Java is enabled, the corresponding xspcomm multi-language packages will also be installed.
picker can also be compiled into a wheel file and installed via pip
To package picker into a wheel installation package, use the following command:
make wheel # or BUILD_XSPCOMM_SWIG=python,java,scala,golang make wheel
After compilation, the wheel file will be located in the dist directory. You can then install it via pip, for example:
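pip install dist/picker-*.whl  # The exact wheel filename depends on the version and platform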
After installation, execute the picker command and expect the following output:
XDut Generate.
Convert DUT(*.v/*.sv) to C++ DUT libs.
Usage: ./build/bin/picker [OPTIONS] [SUBCOMMAND]
Options:
-h,--help Print this help message and exit
-v,--version Print version
--show_default_template_path
Print default template path
--show_xcom_lib_location_cpp
Print xspcomm lib and include location
--show_xcom_lib_location_java
Print xspcomm-java.jar location
--show_xcom_lib_location_scala
Print xspcomm-scala.jar location
--show_xcom_lib_location_python
Print python module xspcomm location
--show_xcom_lib_location_golang
Print golang module xspcomm location
--check check install location and supported languages
Subcommands:
export Export RTL Projects Sources as Software libraries such as C++/Python
pack Pack UVM transaction as a UVM agent and Python class
Installation Test
picker currently has two subcommands: export and pack.
The export subcommand is used to convert RTL designs into “libraries” corresponding to other high-level programming languages, which can be driven through software.
$ picker export --help
Export RTL Projects Sources as Software libraries such as C++/Python
Usage: picker export [OPTIONS] file...
Positionals:
file TEXT ... REQUIRED DUT .v/.sv source file, contain the top module
Options:
-h,--help Print this help message and exit
--fs,--filelist TEXT ... DUT .v/.sv source files, contain the top module, split by comma.
Or use '*.txt' file with one RTL file path per line to specify the file list
--sim TEXT [verilator] vcs or verilator as simulator, default is verilator
--lang,--language TEXT:{python,cpp,java,scala,golang}[python] Build example project, default is python, choose cpp, java or python
--sdir,--source_dir TEXT [/home/yaozhicheng/workspace/picker/template] Template Files Dir, default is ${picker_install_path}/../picker/template
--tdir,--target_dir TEXT [./picker_out] Codegen render files to target dir, default is ./picker_out
--sname,--source_module_name TEXT ...
Pick the module in DUT .v file, default is the last module in the -f marked file
--tname,--target_module_name TEXT
Set the module name and file name of target DUT, default is the same as source.
For example, -T top, will generate UTtop.cpp and UTtop.hpp with UTtop class
--internal TEXT Exported internal signal config file, default is empty, means no internal pin
-F,--frequency TEXT [100MHz] Set the frequency of the **only VCS** DUT, default is 100MHz, use Hz, KHz, MHz, GHz as unit
-w,--wave_file_name TEXT Wave file name, empty means don't dump wave
-c,--coverage Enable coverage, default is not selected as OFF
--cp_lib,--copy_xspcomm_lib BOOLEAN [1]
Copy xspcomm lib to generated DUT dir, default is true
-V,--vflag TEXT User defined simulator compile args, passthrough.
Eg: '-v -x-assign=fast -Wall --trace' || '-C vcs -cc -f filelist.f'
-C,--cflag TEXT User defined gcc/clang compile command, passthrough. Eg: '-O3 -std=c++17 -I./include'
--verbose Verbose mode
-e,--example Build example project, default is OFF
--autobuild BOOLEAN [1] Auto build the generated project, default is true
Static Multi-Module Support:
When generating the wrapper for dut_top.sv/v, picker allows specifying multiple module names and their corresponding instance counts with the --sname parameter. For example, if design files a.v and b.v contain modules A and B respectively, you need 2 instances of A and 3 instances of B in the generated DUT, and you want the combined module to be named C (if not specified, the default name is A_B), this can be achieved using the following command:
picker export path/a.v,path/b.v --sname A,2,B,3 --tname C
Environment Variables:
DUMPVARS_OPTION: Sets the option parameter for $dumpvars. For example, DUMPVARS_OPTION="+mda" picker .... enables array waveform support in VCS.
SIMULATOR_FLAGS: Parameters passed to the backend simulator. Refer to the documentation of the specific backend simulator for details.
CFLAGS: Sets the -cflags parameter for the backend simulator.
The pack subcommand is used to convert UVM sequence_item into other languages and then communicate through TLM (currently supports Python, other languages are under development).
$ picker pack --help
Pack uvm transaction as a uvm agent and python class
Usage: picker pack [OPTIONS] file...
Positionals:
file TEXT ... REQUIRED Sv source file, contain the transaction define
Options:
-h,--help Print this help message and exit
-e,--example Generate example project based on transaction, default is OFF
-c,--force Force delete folder when the code has already generated by picker
-r,--rename TEXT ... Rename transaction name in picker generate code
Test Examples
After picker compilation, execute the following commands in the picker directory to test the examples:
1.2 - Case 1: Simple Adder
Demonstrates the principles and usage of the tool based on the verification of a simple adder, implemented in simple combinational logic.
RTL Source Code
In this case, we drive a 64-bit adder (combinational circuit) with the following source code:
// A verilog 64-bit full adder with carry in and carry out
module Adder #(
    parameter WIDTH = 64
) (
    input  [WIDTH-1:0] a,
    input  [WIDTH-1:0] b,
    input              cin,
    output [WIDTH-1:0] sum,
    output             cout
);
    assign {cout, sum} = a + b + cin;
endmodule
This module implements a 64-bit adder: it takes two 64-bit numbers and a carry-in signal as inputs, and outputs a 64-bit sum and a carry-out signal.
Testing Process
During the testing process, we will create a folder named Adder, containing a file called Adder.v. This file contains the above RTL source code.
Exporting RTL to Python Module
Generating Intermediate Files
Navigate to the Adder folder and execute the following command:
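Based on the export options listed earlier and the description below, the command presumably takes the following form (a sketch; exact flags may differ from the official example):

picker export Adder.v --autobuild=false -w Adder.fst --sname Adder --tdir picker_out_adder --lang python -e --sim verilator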
Uses Adder.v as the top file, with Adder as the top module, and generates a dynamic library using the Verilator simulator with Python as the target language.
Enables waveform output, with the target waveform file as Adder.fst.
Includes files for driving the example project (-e), and does not automatically compile after code generation (-autobuild=false).
The final file output path is picker_out_adder.
Some command-line parameters were not used in this command, and they will be introduced in later sections.
The output directory structure is as follows. Note that these are all intermediate files and cannot be used directly:
picker_out_adder
|-- Adder.v          # Original RTL source code
|-- Adder_top.sv     # Generated Adder_top top-level wrapper, using DPI to drive the Adder module's inputs and outputs
|-- Adder_top.v      # Generated Adder_top top-level wrapper in Verilog, needed because Verdi does not support importing SV source code
|-- CMakeLists.txt   # For invoking the simulator to compile the basic C++ class and package it into a bare DPI function binary dynamic library (libDPIAdder.so)
|-- Makefile         # Generated Makefile for invoking CMakeLists.txt, allowing users to compile libAdder.so through the make command, with manual adjustment of Makefile configuration parameters, or to compile the example project
|-- cmake            # Generated cmake folder for invoking different simulators to compile RTL code
|   |-- vcs.cmake
|   `-- verilator.cmake
|-- cpp              # C++ example directory containing sample code
|   |-- CMakeLists.txt  # For wrapping libDPIAdder.so using basic data types into a directly operable class (libUTAdder.so), not just bare DPI functions
|   |-- Makefile
|   |-- cmake
|   |   |-- vcs.cmake
|   |   `-- verilator.cmake
|   |-- dut.cpp      # Generated C++ UT wrapper, including calls to libDPIAdder.so, and the UTAdder class declaration and implementation
|   |-- dut.hpp      # Header file
|   `-- example.cpp  # Sample code calling the UTAdder class
|-- dut_base.cpp     # Base class for invoking and driving simulation results from different simulators, encapsulated into a unified class to hide all simulator-related details
|-- dut_base.hpp
|-- filelist.f       # Additional file list for multi-file projects, see the -f parameter introduction. Empty in this case
|-- mk
|   |-- cpp.mk       # Controls the Makefile when targeting C++, including logic for compiling the example project (-e, example)
|   `-- python.mk    # Same as above, but with Python as the target language
`-- python
    |-- CMakeLists.txt
    |-- Makefile
    |-- cmake
    |   |-- vcs.cmake
    |   `-- verilator.cmake
    |-- dut.i        # SWIG configuration file for exporting libDPIAdder.so's base class and function declarations to Python, enabling Python calls
    `-- dut.py       # Generated Python UT wrapper, including calls to libDPIAdder.so, and the UTAdder class declaration and implementation, equivalent to libUTAdder.so
Building Intermediate Files
Navigate to the picker_out_adder directory and execute the make command to generate the final files.
1. Use the simulator invocation script defined by cmake/*.cmake to compile Adder_top.sv and related files into the libDPIAdder.so dynamic library.
2. Use the compilation script defined by CMakeLists.txt to wrap libDPIAdder.so into the libUTAdder.so dynamic library via dut_base.cpp. The outputs of steps 1 and 2 are both copied to the UT_Adder directory.
3. Generate the wrapper layer with the SWIG tool, using the dut_base.hpp and dut.hpp header files, and build a Python module in the UT_Adder directory.
4. If the -e parameter is included, the pre-defined example.py is placed in the parent directory of UT_Adder as sample code for calling this Python module.
The final directory structure is:
.
|-- Adder.fst            # Waveform file from the test
|-- UT_Adder
|   |-- Adder.fst.hier
|   |-- _UT_Adder.so     # Wrapper dynamic library generated by SWIG
|   |-- __init__.py      # Python module initialization file, also the library definition file
|   |-- libDPIAdder.a    # Library file generated by the simulator
|   |-- libUTAdder.so    # DPI dynamic library wrapper generated based on dut_base
|   |-- libUT_Adder.py   # Python module generated by SWIG
|   `-- xspcomm          # Base library folder, no need to pay attention to this
`-- example.py           # Sample code
Setting Up Test Code
Replace the content in example.py with the following Python test code.
from Adder import *
import random

# Generate unsigned random numbers
def random_int():
    return random.randint(-(2**63), 2**63 - 1) & ((1 << 63) - 1)

# Reference model for the adder implemented in Python
def reference_adder(a, b, cin):
    sum = (a + b) & ((1 << 64) - 1)
    carry = sum < a
    sum += cin
    carry = carry or sum < cin
    return sum, 1 if carry else 0

def random_test():
    # Create DUT
    dut = DUTAdder()
    # By default, pin assignments do not take effect immediately but on the next rising
    # clock edge, which is suitable for sequential circuits. Since the Adder is a
    # combinational circuit, we call AsImmWrite() to make assignments take effect immediately.
    dut.a.AsImmWrite()
    dut.b.AsImmWrite()
    dut.cin.AsImmWrite()
    # Loop test
    for i in range(114514):
        a, b, cin = random_int(), random_int(), random_int() & 1
        # DUT: assign values to the Adder circuit pins, then drive the combinational circuit
        # (for sequential circuits or waveform viewing, use dut.Step() instead)
        dut.a.value, dut.b.value, dut.cin.value = a, b, cin
        dut.RefreshComb()
        # Reference model: calculate results
        ref_sum, ref_cout = reference_adder(a, b, cin)
        # Check results
        assert dut.sum.value == ref_sum, f"sum mismatch: 0x{dut.sum.value:x} != 0x{ref_sum:x}"
        assert dut.cout.value == ref_cout, f"cout mismatch: 0x{dut.cout.value:x} != 0x{ref_cout:x}"
        print(f"[test {i}] a=0x{a:x}, b=0x{b:x}, cin=0x{cin:x} => sum: 0x{ref_sum:x}, cout: 0x{ref_cout:x}")
    # Test complete
    dut.Finish()
    print("Test Passed")

if __name__ == "__main__":
    random_test()
Running the Test
In the picker_out_adder directory, execute the python3 example.py command to run the test. After the test is complete, we can see the output of the example project.
1.3 - Case 2: Random Number Generator
This random number generator contains a 16-bit LFSR, with a 16-bit seed as input and a 16-bit random number as output. The LFSR is updated according to the following rules:
XOR the highest bit and the second-highest bit of the current LFSR to generate a new_bit.
Shift the original LFSR left by one bit, and place new_bit in the lowest bit.
Discard the highest bit.
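The RTL source referenced below is not shown on this page; a minimal sketch consistent with the rules above and with the pin names used in the test code (clk, reset, seed, random_number) might look like this:

module RandomGenerator (
    input  wire        clk,
    input  wire        reset,
    input       [15:0] seed,
    output      [15:0] random_number
);
    reg [15:0] lfsr;
    // On reset, load the seed; otherwise shift left and feed the XOR of the
    // top two bits into the lowest bit, discarding the highest bit
    always @(posedge clk or posedge reset) begin
        if (reset)
            lfsr <= seed;
        else
            lfsr <= {lfsr[14:0], lfsr[15] ^ lfsr[14]};
    end
    assign random_number = lfsr;
endmodule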
Testing Process
During testing, we will create a folder named RandomGenerator, which contains a RandomGenerator.v file. The content of this file is the RTL source code mentioned above.
Building the RTL into a Python Module
Generating Intermediate Files
Navigate to the RandomGenerator folder and execute the following command:
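As with the adder, the command presumably takes the following form (a sketch reconstructed from the description below; exact flags may differ):

picker export RandomGenerator.v --autobuild=false -w RandomGenerator.fst --sname RandomGenerator --tdir picker_out_rmg --lang python -e --sim verilator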
Uses RandomGenerator.v as the top file and RandomGenerator as the top module, generating a dynamic library with the Verilator simulator, targeting Python as the output language.
Enables waveform output, with the target waveform file being RandomGenerator.fst.
Includes files for driving the example project (-e), and does not automatically compile after code generation (-autobuild=false).
Navigate to the picker_out_rmg directory and execute the make command to generate the final files.
Note: The compilation process is similar to Adder Verification - Compilation Process , so it will not be elaborated here.
The final directory structure will be:
picker_out_rmg
|-- RandomGenerator.fst # Waveform file from the test|-- UT_RandomGenerator
||-- RandomGenerator.fst.hier
||-- _UT_RandomGenerator.so # Swig-generated wrapper dynamic library||-- __init__.py # Initialization file for the Python module, also the library definition file||-- libDPIRandomGenerator.a # Library file generated by the simulator||-- libUTRandomGenerator.so # libDPI dynamic library wrapper generated based on dut_base|`-- libUT_RandomGenerator.py # Python module generated by Swig|`-- xspcomm # xspcomm base library, fixed folder, no need to pay attention to it`-- example.py # Example code
Configuring the Test Code
Replace the content of example.py with the following code.
from RandomGenerator import *
import random

# Define the reference model
class LFSR_16:
    def __init__(self, seed):
        self.state = seed & ((1 << 16) - 1)

    def Step(self):
        new_bit = ((self.state >> 15) ^ (self.state >> 14)) & 1
        self.state = ((self.state << 1) | new_bit) & ((1 << 16) - 1)

if __name__ == "__main__":
    dut = DUTRandomGenerator()           # Create the DUT
    dut.InitClock("clk")                 # Specify the clock pin and initialize the clock
    seed = random.randint(0, 2**16 - 1)  # Generate a random seed
    dut.seed.value = seed                # Set the DUT seed
    ref = LFSR_16(seed)                  # Create a reference model for comparison

    # Reset the DUT
    dut.reset.value = 1  # Set reset signal to 1
    dut.Step()           # Advance one clock cycle (DUTRandomGenerator is a sequential circuit, it requires advancing via Step)
    dut.reset.value = 0  # Set reset signal to 0
    dut.Step()           # Advance one clock cycle

    for i in range(65536):  # Loop 65536 times
        dut.Step()          # Advance the DUT one clock cycle, generating a random number
        ref.Step()          # Advance the reference model one clock cycle, generating a random number
        # Compare the random numbers generated by the DUT and the reference model
        assert dut.random_number.value == ref.state, "Mismatch"
        print(f"Cycle {i}, DUT: {dut.random_number.value:x}, REF: {ref.state:x}")

    # Complete the test
    print("Test Passed")
    dut.Finish()  # Finish() completes the writing of waveform, coverage, and other files
Running the Test Program
Execute python3 example.py in the picker_out_rmg directory to run the test program. If Test Passed is printed at the end, the test is considered passed. After the run is complete, a waveform file RandomGenerator.fst will be generated, which can be opened from the terminal with a waveform viewer such as GTKWave:
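gtkwave RandomGenerator.fst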
1.4 - Case 3: Dual-Port Stack (Callbacks)
A dual-port stack is a stack with two ports, each supporting push and pop operations. This case study uses a dual-port stack as an example to demonstrate how to use callback functions to drive the DUT.
Introduction to the Dual-Port Stack
A dual-port stack is a data structure that supports simultaneous operations on two ports. Compared to a traditional single-port stack, a dual-port stack allows simultaneous read and write operations. In scenarios such as multithreaded concurrent read and write operations, the dual-port stack can provide better performance. In this example, we provide a simple dual-port stack implementation, with the source code as follows:
In this implementation, aside from the clock signal (clk) and reset signal (rst), there are also input and output signals for the two ports, which have the same interface definition. The meaning of each signal for the ports is as follows:
When we want to perform an operation on the stack through a port, we first need to write the required data and command to the input port, and then wait for the output port to return the result.
Specifically, if we want to perform a PUSH operation on the stack, we first write the data to be pushed into in_data, set in_cmd to 0 to indicate a PUSH operation, and set in_valid to 1 to indicate that the input data is valid. We then wait for in_ready to become 1, which confirms that the data has been correctly received and the PUSH request has been sent.

After the command is successfully sent, we wait for the stack's response on the response port. When out_valid is 1, the stack has completed the corresponding operation; at this point we can read the returned data from out_data (the data returned by a POP operation is placed here) and the returned command from out_cmd. After reading the data, we set out_ready to 1 to notify the stack that the response has been correctly received.
If requests from both ports are valid simultaneously, the stack will prioritize processing requests from port 0.
Setting Up the Driver Environment
Similar to Case Study 1 and Case Study 2, before testing the dual-port stack, we first need to use the Picker tool to build the RTL code into a Python Module. After the build is complete, we will use a Python script to drive the RTL code for testing.
First, create a file named dual_port_stack.v and copy the above RTL code into this file. Then, execute the following command in the same folder:
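The exact command is omitted here; it is presumably similar to the previous cases (with automatic compilation enabled, since the generated module is used directly):

picker export dual_port_stack.v -w dual_port_stack.fst --sname dual_port_stack --tdir picker_out_dual_port_stack --lang python -e --sim verilator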
The generated driver environment is located in the picker_out_dual_port_stack folder. Inside, UT_dual_port_stack is the generated Python Module, and example.py is the test script.
You can run the test script with the following commands:
cd picker_out_dual_port_stack
python3 example.py
If no errors occur during the run, it means the environment has been set up correctly.
Driving the DUT with Callback Functions
In this case, we need to drive a dual-port stack to test its functionality. However, you may quickly realize that the methods used in Cases 1 and 2 are insufficient for driving a dual-port stack. In the previous tests, the DUT had a single execution logic where you input data into the DUT and wait for the output.
However, a dual-port stack is different because its two ports operate with independent execution logic. During the drive process, these two ports might be in entirely different states. For example, while port 0 is waiting for data from the DUT, port 1 might be sending a new request. In such situations, simple sequential execution logic will struggle to drive the DUT effectively.
Therefore, in this case, we will use the dual-port stack as an example to introduce a callback function-based driving method to handle such DUTs.
Introduction to Callback Functions
A callback function is a common programming technique that allows us to pass a function as an argument, which is then called when a certain condition is met. In the generated Python Module, we provide an interface StepRis for registering callback functions with the internal execution environment. Here’s how it works:
from dual_port_stack import DUTdual_port_stack

def callback(cycles):
    print(f"The current clock cycle is {cycles}")

dut = DUTdual_port_stack()
dut.StepRis(callback)
dut.Step(10)
You can run this code directly to see the effect of the callback function.
In the above code, we define a callback function callback that takes a cycles parameter and prints the current clock cycle each time it is called. We then register this callback function with the DUT via StepRis. Once the callback function is registered, each time the Step function runs, which corresponds to each clock cycle, the callback function is invoked on the rising edge of the clock signal, with the current clock cycle passed as an argument.
Using this approach, we can write different execution logics as callback functions and register multiple callback functions to the DUT, thereby achieving parallel driving of the DUT.
Dual-Port Stack Driven by Callback Functions
To complete a full execution logic using callback functions, we typically write it in the form of a state machine. Each callback function invocation triggers a state change within the state machine, and multiple invocations complete a full execution logic.
Below is an example code for driving a dual-port stack using callback functions:
import random
from dual_port_stack import *
from enum import Enum

class StackModel:
    def __init__(self):
        self.stack = []

    def commit_push(self, data):
        self.stack.append(data)
        print("push", data)

    def commit_pop(self, dut_data):
        print("Pop", dut_data)
        model_data = self.stack.pop()
        assert model_data == dut_data, f"The model data {model_data} is not equal to the dut data {dut_data}"
        print(f"Pass: {model_data} == {dut_data}")

class SinglePortDriver:
    class Status(Enum):
        IDLE = 0
        WAIT_REQ_READY = 1
        WAIT_RESP_VALID = 2

    class BusCMD(Enum):
        PUSH = 0
        POP = 1
        PUSH_OKAY = 2
        POP_OKAY = 3

    def __init__(self, dut, model: StackModel, port_dict):
        self.dut = dut
        self.model = model
        self.port_dict = port_dict

        self.status = self.Status.IDLE
        self.operation_num = 0
        self.remaining_delay = 0

    def push(self):
        self.port_dict["in_valid"].value = 1
        self.port_dict["in_cmd"].value = self.BusCMD.PUSH.value
        self.port_dict["in_data"].value = random.randint(0, 2**32 - 1)

    def pop(self):
        self.port_dict["in_valid"].value = 1
        self.port_dict["in_cmd"].value = self.BusCMD.POP.value

    def step_callback(self, cycle):
        if self.status == self.Status.WAIT_REQ_READY:
            if self.port_dict["in_ready"].value == 1:
                self.port_dict["in_valid"].value = 0
                self.port_dict["out_ready"].value = 1
                self.status = self.Status.WAIT_RESP_VALID

                if self.port_dict["in_cmd"].value == self.BusCMD.PUSH.value:
                    self.model.commit_push(self.port_dict["in_data"].value)
        elif self.status == self.Status.WAIT_RESP_VALID:
            if self.port_dict["out_valid"].value == 1:
                self.port_dict["out_ready"].value = 0
                self.status = self.Status.IDLE
                self.remaining_delay = random.randint(0, 5)

                if self.port_dict["out_cmd"].value == self.BusCMD.POP_OKAY.value:
                    self.model.commit_pop(self.port_dict["out_data"].value)

        if self.status == self.Status.IDLE:
            if self.remaining_delay == 0:
                if self.operation_num < 10:
                    self.push()
                elif self.operation_num < 20:
                    self.pop()
                else:
                    return
                self.operation_num += 1
                self.status = self.Status.WAIT_REQ_READY
            else:
                self.remaining_delay -= 1

def test_stack(stack):
    model = StackModel()

    port0 = SinglePortDriver(stack, model, {
        "in_valid": stack.in0_valid, "in_ready": stack.in0_ready,
        "in_data": stack.in0_data, "in_cmd": stack.in0_cmd,
        "out_valid": stack.out0_valid, "out_ready": stack.out0_ready,
        "out_data": stack.out0_data, "out_cmd": stack.out0_cmd,
    })

    port1 = SinglePortDriver(stack, model, {
        "in_valid": stack.in1_valid, "in_ready": stack.in1_ready,
        "in_data": stack.in1_data, "in_cmd": stack.in1_cmd,
        "out_valid": stack.out1_valid, "out_ready": stack.out1_ready,
        "out_data": stack.out1_data, "out_cmd": stack.out1_cmd,
    })

    stack.StepRis(port0.step_callback)
    stack.StepRis(port1.step_callback)

    stack.Step(200)

if __name__ == "__main__":
    dut = DUTdual_port_stack()
    dut.InitClock("clk")
    test_stack(dut)
    dut.Finish()
In the code above, each port drives the DUT independently, with a random delay added after each request completes. Each port performs 10 PUSH operations and 10 POP operations.

When a PUSH or POP request takes effect, the corresponding commit_push or commit_pop function in StackModel is called to simulate stack behavior. After each POP operation, the data returned by the DUT is compared with the model's data to ensure consistency.

To implement the driving behavior for a single port, we created the SinglePortDriver class, which includes methods for sending and receiving data; the step_callback function handles the internal update logic.

In the test_stack function, we create a SinglePortDriver instance for each port of the dual-port stack, pass in the corresponding interfaces, and register the callback functions with the DUT using StepRis. When dut.Step(200) is called, the callback functions are automatically invoked every clock cycle to carry out the entire driving logic.

SinglePortDriver Driving Logic
As mentioned earlier, callback functions typically require the execution logic to be implemented as a state machine. In the SinglePortDriver class, the status of each port is therefore recorded, including:
IDLE: Idle state, waiting for the next operation.
In the idle state, check the remaining_delay status to determine whether the current delay has ended. If the delay has ended, proceed with the next operation; otherwise, continue waiting.
When the next operation is ready, check the operation_num status (the number of operations already performed) to determine whether the next operation should be PUSH or POP. Then, call the corresponding function to assign values to the port and switch the status to WAIT_REQ_READY.
WAIT_REQ_READY: Waiting for the request port to be ready.
After the request is sent (in_valid is valid), wait for the in_ready signal to be valid to ensure the request has been correctly received.
Once the request is correctly received, set in_valid to 0 and out_ready to 1, indicating the request is complete and ready to receive a response.
WAIT_RESP_VALID: Waiting for the response port to return data.
After the request is correctly received, wait for the DUT’s response, i.e., wait for the out_valid signal to be valid. When the out_valid signal is valid, it indicates that the response has been generated and the request is complete. Set out_ready to 0 and switch the status to IDLE.
Running the Test
Copy the above code into example.py, and then run the following command:
cd picker_out_dual_port_stack
python3 example.py
You can run the test code for this case directly, and you will see output similar to the following:
In the output, you can see the data for each PUSH and POP operation, as well as the result of each POP operation. If there is no error message in the output, it indicates that the test has passed.
Pros and Cons of Callback-Driven Design
By using callbacks, we can achieve parallel driving of the DUT, as demonstrated in this example. We utilized two callbacks to drive two ports with independent execution logic. In simple scenarios, callbacks offer a straightforward method for parallel driving.
However, as shown in this example, even implementing a simple “request-response” flow requires maintaining a significant amount of internal state. Callbacks break down what should be a cohesive execution logic into multiple function calls, adding considerable complexity to both the code writing and debugging processes.
1.5 - Case 4: Dual-Port Stack (Coroutines)
The dual-port stack is a stack with two ports, each supporting push and pop operations. This case study uses the dual-port stack as an example to demonstrate how to drive a DUT using coroutines.
Introduction to the Dual-Port Stack and Environment Setup
In Case 3, we used callbacks to drive the DUT. While callbacks offer a way to perform parallel operations, they break the execution flow into multiple function calls and require maintaining a large amount of intermediate state, making the code more complex to write and debug.
In this case, we will introduce a method of driving the DUT using coroutines. This method not only allows for parallel operations but also avoids the issues associated with callbacks.
Introduction to Coroutines
Coroutines are a form of “lightweight” threading that enables behavior similar to concurrent execution without the overhead of traditional threads. Coroutines operate on a single-threaded event loop, where multiple coroutines can be defined and added to the event loop, with the event loop managing their scheduling.
Typically, a defined coroutine will continue to execute until it encounters an event that requires waiting. At this point, the event loop pauses the coroutine and schedules other coroutines to run. Once the event occurs, the event loop resumes the paused coroutine to continue execution.
For parallel execution in hardware verification, this behavior is precisely what we need. We can create multiple coroutines to handle various verification tasks. We can treat the clock execution as an event, and within each coroutine, wait for this event. When the clock signal arrives, the event loop wakes up all the waiting coroutines, allowing them to continue executing until they wait for the next clock signal.
We use Python’s asyncio to implement coroutine support:
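The example code is omitted on this page; a minimal sketch consistent with the description below, assuming the AStep and RunStep interfaces that appear in the full example later in this case, might look like this:

import asyncio
from dual_port_stack import *

async def my_coro(dut, name):
    for i in range(10):
        print(f"{name}: {i}")
        await dut.AStep(1)  # Wait for the next clock signal

async def test_dut(dut):
    # Create two coroutine tasks and add them to the event loop
    asyncio.create_task(my_coro(dut, "coroutine 1"))
    asyncio.create_task(my_coro(dut, "coroutine 2"))
    # Create a background clock that generates clock synchronization signals
    await asyncio.create_task(dut.RunStep(10))

if __name__ == "__main__":
    dut = DUTdual_port_stack()
    dut.InitClock("clk")
    asyncio.run(test_dut(dut))
    dut.Finish()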
You can run the above code directly to observe the execution of coroutines. In the code, we use create_task to create two coroutine tasks and add them to the event loop; each coroutine task repeatedly prints a number and waits for the next clock signal. We use dut.RunStep(10) to create a background clock, which continuously generates clock synchronization signals so that the other coroutines resume execution when each clock signal arrives.
Driving the Dual-Port Stack with Coroutines
Using coroutines, we can write the logic for driving each port of the dual-port stack as an independent execution flow without needing to maintain a large amount of intermediate state.
Below is a simple verification code using coroutines:
import asyncio
import random
from dual_port_stack import *
from enum import Enum

class StackModel:
    def __init__(self):
        self.stack = []

    def commit_push(self, data):
        self.stack.append(data)
        print("Push", data)

    def commit_pop(self, dut_data):
        print("Pop", dut_data)
        model_data = self.stack.pop()
        assert model_data == dut_data, f"The model data {model_data} is not equal to the dut data {dut_data}"
        print(f"Pass: {model_data} == {dut_data}")

class SinglePortDriver:
    class BusCMD(Enum):
        PUSH = 0
        POP = 1
        PUSH_OKAY = 2
        POP_OKAY = 3

    def __init__(self, dut, model: StackModel, port_dict):
        self.dut = dut
        self.model = model
        self.port_dict = port_dict

    async def send_req(self, is_push):
        self.port_dict["in_valid"].value = 1
        self.port_dict["in_cmd"].value = self.BusCMD.PUSH.value if is_push else self.BusCMD.POP.value
        self.port_dict["in_data"].value = random.randint(0, 2**8 - 1)

        await self.dut.AStep(1)
        await self.dut.Acondition(lambda: self.port_dict["in_ready"].value == 1)
        self.port_dict["in_valid"].value = 0

        if is_push:
            self.model.commit_push(self.port_dict["in_data"].value)

    async def receive_resp(self):
        self.port_dict["out_ready"].value = 1
        await self.dut.AStep(1)
        await self.dut.Acondition(lambda: self.port_dict["out_valid"].value == 1)
        self.port_dict["out_ready"].value = 0

        if self.port_dict["out_cmd"].value == self.BusCMD.POP_OKAY.value:
            self.model.commit_pop(self.port_dict["out_data"].value)

    async def exec_once(self, is_push):
        await self.send_req(is_push)
        await self.receive_resp()
        for _ in range(random.randint(0, 5)):
            await self.dut.AStep(1)

    async def main(self):
        for _ in range(10):
            await self.exec_once(is_push=True)
        for _ in range(10):
            await self.exec_once(is_push=False)

async def test_stack(stack):
    model = StackModel()

    port0 = SinglePortDriver(stack, model, {
        "in_valid": stack.in0_valid, "in_ready": stack.in0_ready,
        "in_data": stack.in0_data, "in_cmd": stack.in0_cmd,
        "out_valid": stack.out0_valid, "out_ready": stack.out0_ready,
        "out_data": stack.out0_data, "out_cmd": stack.out0_cmd,
    })

    port1 = SinglePortDriver(stack, model, {
        "in_valid": stack.in1_valid, "in_ready": stack.in1_ready,
        "in_data": stack.in1_data, "in_cmd": stack.in1_cmd,
        "out_valid": stack.out1_valid, "out_ready": stack.out1_ready,
        "out_data": stack.out1_data, "out_cmd": stack.out1_cmd,
    })

    asyncio.create_task(port0.main())
    asyncio.create_task(port1.main())

    await asyncio.create_task(stack.RunStep(200))

if __name__ == "__main__":
    dut = DUTdual_port_stack()
    dut.InitClock("clk")
    asyncio.run(test_stack(dut))
    dut.Finish()
Similar to Case 3, we define a SinglePortDriver class to handle the logic for driving a single port. In the main function, we create two instances of SinglePortDriver, each responsible for driving one of the two ports. We place the driving processes for both ports in the main function and add them to the event loop using asyncio.create_task. Finally, we use dut.RunStep(200) to create a background clock to drive the test.
This code implements the same test logic as in Case 3, where each port performs 10 PUSH and 10 POP operations, followed by a random delay after each operation. As you can see, using coroutines eliminates the need to maintain any intermediate state.
SinglePortDriver Logic
In the SinglePortDriver class, we encapsulate a single operation into the exec_once function. In the main function, we first call exec_once(is_push=True) 10 times to complete the PUSH operations, and then call exec_once(is_push=False) 10 times to complete the POP operations.

In the exec_once function, we first call send_req to send a request, then call receive_resp to receive the response, and finally wait a random number of clock cycles to simulate a delay.

The send_req and receive_resp functions have similar logic: they set the corresponding input/output signals to the appropriate values and wait for the corresponding signals to become valid, written to follow the execution sequence of the ports.

As before, we use the StackModel class to simulate stack behavior. The commit_push and commit_pop functions simulate the PUSH and POP operations respectively, with the POP operation also comparing the data.
Running the Test
Copy the above code into example.py and then execute the following commands:
cd picker_out_dual_port_stack
python3 example.py
You can run the test code for this case directly, and you will see output similar to the following:
In the output, you can see the data for each PUSH and POP operation, as well as the result of each POP operation. If there are no error messages in the output, it indicates that the test passed.
Pros and Cons of Coroutine-Driven Design
Using coroutine functions, we can effectively achieve parallel operations while avoiding the issues that come with callback functions. Each independent execution flow can be fully retained as a coroutine, which greatly simplifies code writing.
However, in more complex scenarios, you may find that having many coroutines can make synchronization and timing management between them more complicated. This is especially true when you need to synchronize between two coroutines that do not directly interact with the DUT.
At this point, you'll need a set of coroutine writing standards and design patterns for verification code to help you write coroutine-based verification code more effectively. Therefore, we provide the mlvp library (https://github.com/XS-MLVP/mlvp), which offers a set of design patterns for coroutine-based verification code; see the repository to learn more about how mlvp can help you write better verification code.
2 - Environment Usage
Detailed usage instructions for the Open Verification Platform environment.
2.1 - Tool Introduction
Basic usage of the verification tool.
To meet the requirements of an open verification environment, we have developed the Picker tool, which is used to convert RTL designs into multi-language interfaces for verification. We will use the environment generated by the Picker tool as the basic verification environment. Next, we will introduce the Picker tool and its basic usage.
Introduction to Picker
Picker is an auxiliary tool for chip verification with two main functions:
Packaging RTL Design Verification Modules: Picker can package RTL design verification modules (.v/.scala/.sv) into dynamic libraries and provide programming interfaces in various high-level languages (currently supporting C++, Python, Java, Scala, Golang) to drive the circuit.
Automatic UVM-TLM Code Generation: Picker can automate TLM code encapsulation based on the UVM sequence_item provided by the user, providing a communication interface between UVM and other high-level languages such as Python.
This tool allows users to perform chip unit testing based on existing software testing frameworks such as pytest, junit, TestNG, go test, etc.
Advantages of Verification Using Picker:
No RTL Design Leakage: After conversion by Picker, the original design files (.v) are transformed into binary files (.so). Verification can still be performed without the original design files, and the verifier cannot access the RTL source code.
Reduced Compilation Time: When the DUT (Design Under Test) is stable, it only needs to be compiled once (packaged into a .so file).
Wide User Base: With support for multiple programming interfaces, it caters to developers of various languages.
Utilization of a Rich Software Ecosystem: Supports ecosystems such as Python3, Java, Golang, etc.
Automated UVM Transaction Encapsulation: Enables communication between UVM and Python through automated UVM transaction encapsulation.
RTL Simulators Currently Supported by Picker:
Verilator
Synopsys VCS
Working Principle of Picker
The main function of Picker is to convert Verilog code into C++ or Python code. For example, using a processor developed with Chisel: first, it is converted into Verilog code through Chisel's built-in tools, and then Picker provides high-level programming language interfaces.
Python Module Generation
Process of Module Generation
Picker exports Python modules based on C++.
Picker is a code generation tool. It first generates project files and then uses make to compile them into binary files.
Picker first uses a simulator to compile the RTL code into a C++ class and then compiles it into a dynamic library (see the C++ steps for details).
Using the Swig tool, Picker then exports the dynamic library as a Python module based on the C++ header file definitions generated in the previous step.
Finally, the generated module is exported to a directory, with other intermediate files being either cleaned up or retained as needed.
Swig is a tool used to export C/C++ code to other high-level languages. It parses C++ header files and generates corresponding intermediate code.
For detailed information on the generation process, please refer to the Swig official documentation .
For information on how Picker generates C++ classes, please refer to C++ .
The generated module can be imported and used by other Python programs, with a file structure similar to that of standard Python modules.
Using the Python Module
The --language python or --lang python parameter specifies the generation of the Python base library.
The --example, -e parameter generates an executable file containing an example project.
The --verbose, -v parameter preserves intermediate files generated during project creation.
Picker automatically generates a base class in Python, referred to as the DUT class. For the adder example, the user needs to write test cases, importing the Python module generated in the previous section and calling its methods to operate on the hardware module. The directory structure is as follows:
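Following the Adder example from the Quick Start, a typical layout is presumably:

picker_out_adder
|-- UT_Adder    # Python module generated by Picker
`-- example.py  # Test cases written by the user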
The DUTAdder class has a total of eight methods, as shown below:
class DUTAdder:
    def InitClock(name: str)   # Initialize the clock, with the clock pin name as a parameter, e.g., clk
    def Step(i: int = 1)       # Advance the circuit by i cycles
    def StepRis(callback: Callable, args=(), kwargs={})  # Set a rising-edge callback function
    def StepFal(callback: Callable, args=(), kwargs={})  # Set a falling-edge callback function
    def SetWaveform(filename)  # Set the waveform file
    def SetCoverage(filename)  # Set the code coverage file
    def RefreshComb()          # Advance the combinational circuit
    def Finish()               # Destroy the circuit
Pins corresponding to the DUT, such as reset and clock, are represented as member variables in the DUTAdder class. As shown below, pin values can be read and written via the value attribute.
from Adder import *

dut = DUTAdder()
dut.a.value = 1  # Assign a value to the pin via the .value attribute
dut.a[12] = 1    # Assign a value to bit 12 of input pin a
x = dut.a.value  # Read the value of pin a
y = dut.a[12]    # Read bit 12 of pin a
General Flow for Driving DUT
Create DUT and Set Pin Modes: By default, pins are assigned values on the rising edge of the next cycle. For combinational logic, you need to set the assignment mode to immediate assignment.
Initialize the Clock: This binds the clock pin to the internal xclock of the DUT. Combinational logic does not require a clock and can be ignored.
Reset the Circuit: Most sequential circuits need to be reset.
Write Data to DUT Input Pins: Use the pin.Set(x) interface or pin.value = x for assignment.
Drive the Circuit: Use Step for sequential circuits and RefreshComb for combinational circuits.
Obtain and Check Outputs of DUT Pins: For example, compare the results with a reference model using assertions.
Complete Verification and Destroy DUT: Calling Finish() will write waveform, coverage, and other information to files.
The corresponding pseudocode is as follows:
from DUT import *

# 1 Create
dut = DUT()

# 2 Initialize
dut.SetWaveform("test.fst")
dut.InitClock("clock")

# 3 Reset
dut.reset = 1
dut.Step(1)
dut.reset = 0
dut.Step(1)

# 4 Input data
dut.input_pin1.value = 0x123123
dut.input_pin3.value = "0b1011"

# 5 Drive the circuit
dut.Step(1)

# 6 Get results
x = dut.output_pin.value
print("result:", x)

# 7 Destroy
dut.Finish()
Other Data Types
In general, most DUT verification tasks can be accomplished using the interfaces provided by the DUT class. However, for special cases, additional interfaces are needed, such as custom clocks, asynchronous operations, advancing combinational circuits and writing waveforms, and modifying pin properties.
In the DUT class generated by Picker, in addition to the XData-type pin member variables, there are also an XClock-type xclock and an XPort-type xport.
class DUTAdder(object):
    xport: XPort    # Member variable xport, for managing all pins in the DUT
    xclock: XClock  # Member variable xclock, for managing the clock
    # DUT pins
    a: XData
    b: XData
    cin: XData
    sum: XData
    cout: XData
XData Class
Data in DUT pins usually have an uncertain bit width and can be in one of four states: 0, 1, Z, and X. Picker provides XData to represent pin data in the circuit.
Main Methods
class XData:
    # Split out a sub-reference of XData, e.g., create a separate XData for bits 7-10 of a 32-bit XData
    # name: name, start: start bit, width: bit width, e.g., sub = a.SubDataRef("sub_pin", 0, 4)
    def SubDataRef(name, start, width): XData
    def GetWriteMode(): WriteMode      # Get the write mode of XData: Imme (immediate), Rise (rising edge), Fall (falling edge)
    def SetWriteMode(mode: WriteMode)  # Set the write mode of XData, e.g., a.SetWriteMode(WriteMode.Imme)
    def DataValid(): bool              # Check whether the data is valid (returns False if the value contains X or Z states, otherwise True)
    def W(): int                       # Get the bit width of XData (0 indicates XData is of Verilog's logic type, otherwise it is the width of the Vec type)
    def U(): int                       # Get the unsigned value of XData (e.g., x = a.value)
    def S(): int                       # Get the signed value of XData
    def String(): str                  # Convert XData to a hexadecimal string, e.g., "0x123ff"; a "?" means the corresponding 4 bits contain an X or Z state
    def Equal(xdata): bool             # Compare two XData instances for equality
    def Set(value)                     # Assign a value to XData; value can be XData, string, int, bytes, etc.
    def GetBytes(): bytes              # Get the value of XData in bytes format
    def Connect(xdata): bool           # Connect two XData instances; only In and Out types can be connected. When the Out data changes, the In XData is updated automatically
    def IsInIO(): bool                 # Check if XData is of In type, which can be read and written
    def IsOutIO(): bool                # Check if XData is of Out type, which is read-only
    def IsBiIO(): bool                 # Check if XData is of Bi type, which can be read and written
    def IsImmWrite(): bool             # Check if XData is in Imme write mode
    def IsRiseWrite(): bool            # Check if XData is in Rise write mode
    def IsFallWrite(): bool            # Check if XData is in Fall write mode
    def AsImmWrite()                   # Change XData's write mode to Imme
    def AsRiseWrite()                  # Change XData's write mode to Rise
    def AsFallWrite()                  # Change XData's write mode to Fall
    def AsBiIO()                       # Change XData to Bi type
    def AsInIO()                       # Change XData to In type
    def AsOutIO()                      # Change XData to Out type
    def FlipIOType()                   # Invert the IO type of XData, e.g., In to Out or Out to In
    def Invert()                       # Invert the data in XData
    def At(index): PinBind             # Get the pin at index, e.g., x = a.At(12).Get() or a.At(12).Set(1)
    def AsBinaryString()               # Convert XData's data to a binary string, e.g., "1001011"
To simplify assignment operations, XData has overloaded property assignment for Set(value) and U() methods, allowing assignments and retrievals with pin.value = x and x = pin.value.
# Access via .value
# a is of XData type
a.value = 12345    # Decimal assignment
a.value = 0b11011  # Binary assignment
a.value = 0o12345  # Octal assignment
a.value = 0x12345  # Hexadecimal assignment
a.value = -1       # Set all bits to 1; a.value = x is equivalent to a.Set(x)
a[31] = 0          # Assign a value to bit 31
a.value = "x"      # Assign the unknown state
a.value = "z"      # Assign the high-impedance state
x = a.value        # Retrieve the value, equivalent to x = a.U()
XPort Class
Directly operating on XData is clear and intuitive when dealing with a few pins. However, managing multiple XData instances can be cumbersome. XPort is a wrapper around XData that allows centralized management of multiple XData instances. It also provides methods for convenient batch management.
Initialization and Adding Pins
port = XPort("p")  # Create an XPort instance with prefix p
Main Methods
class XPort:
    def XPort(prefix="")       # Create a port with the given prefix, e.g., p = XPort("tile_link_")
    def PortCount(): int       # Get the number of pins in the port (i.e., the number of bound XData instances)
    def Add(pin_name, XData)   # Add a pin, e.g., p.Add("reset", dut.reset)
    def Del(pin_name)          # Delete a pin
    def Connect(xport2)        # Connect two ports
    def NewSubPort(subprefix): XPort    # Create a sub-port with all pins whose names start with subprefix
    def Get(key, raw_key=False): XData  # Get an XData
    def SetZero()              # Set all XData in the port to 0
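A short, hypothetical usage sketch based on the methods above (pin and prefix names are illustrative):

port = XPort("io_")      # All pins added below are managed under the io_ prefix
port.Add("a", dut.a)     # Bind pin a
port.Add("b", dut.b)     # Bind pin b
print(port.PortCount())  # Prints 2
port.SetZero()           # Drive all bound pins to 0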
XClock Class
XClock is a wrapper for the circuit clock used to drive the circuit. In traditional simulation tools (e.g., Verilator), you need to manually assign values to clk and update the state using functions like step_eval. Our tool provides methods to bind the clock directly to XClock, allowing the Step() method to simultaneously update the clk and circuit state.
Initialization and Adding Pins
# Initialization
clk = XClock(stepfunc)  # The stepfunc parameter is the circuit advancement method provided by the DUT backend, e.g., Verilator's step_eval
Main Methods
class XClock:
    def Add(xdata)      # Bind the clock with xdata, e.g., clock.Add(dut.clk)
    def Add(xport)      # Bind the clock with an XPort
    def RefreshComb()   # Advance the circuit state without advancing time or dumping the waveform
    def RefreshCombT()  # Advance the circuit state (advancing time and dumping the waveform)
    def Step(s: int = 1)  # Advance the circuit by s clock cycles; DUT.Step = DUT.xclock.Step
    def StepRis(func, args=(), kwargs={})  # Set a rising-edge callback; DUT.StepRis = DUT.xclock.StepRis
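A hypothetical sketch of manual clock setup (DUTs generated by Picker already do this internally via InitClock):

clk = XClock(stepfunc)  # stepfunc is the circuit advancement method from the DUT backend
clk.Add(dut.clk)        # Bind the clock pin
clk.StepRis(lambda cycles: print("rising edge, cycle", cycles))  # Optional rising-edge callback
clk.Step(5)             # Advance the circuit by 5 clock cycles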
2.2 - Waveform Generation
Generate circuit waveforms.
Usage
When using the Picker tool to encapsulate the DUT, use the -w [wave_file] option to specify the waveform file to be saved. Different waveform file types are supported by the different backend simulators, as follows:

Verilator
.vcd format waveform file.
.fst format waveform file, a more efficient compressed file.

VCS
.fsdb format waveform file, a more efficient compressed file.
.vcd format waveform file.
Note that if you choose to generate the libDPI{DUT_NAME}.so file yourself, the waveform file format is not restricted by the above constraints. The waveform file format is determined when the simulator builds libDPI.so, so if you generate it yourself, you need to specify the waveform format via the corresponding simulator's configuration.
Python Example
Normally, the DUT needs to be explicitly declared complete to notify the simulator to perform post-processing tasks (writing waveform, coverage files, etc.). In Python, after completing all tests, call the .Finish() method of the DUT to notify the simulator that the task is complete, and then flush the files to disk.
Using the Adder Example, the test program is as follows:
from Adder import *

if __name__ == "__main__":
    dut = DUTAdder()
    for i in range(10):
        dut.a.value = i * 2
        dut.b.value = int(i / 4)
        dut.Step(1)
        print(dut.sum.value, dut.cout.value)
    dut.Finish()  # Flush the waveform file to disk
After the run is completed, the waveform file with the specified name will be generated.
Viewing Results
GTKWave
Use GTKWave to open fst or vcd waveform files to view the waveform.
Verdi
Use Verdi to open fsdb or vcd waveform files to view the waveform.
2.3 - Multi-File Input
Handling multiple Verilog source files
Multi-File Input and Output
In many cases, a module in one file may instantiate modules defined in other files. In such cases, you can use the picker tool's --fs (--filelist) option to process multiple Verilog source files. For example, suppose you have three source files: Cache.sv, CacheStage.sv, and CacheMeta.sv:
File List
Cache.sv
// In Cache.sv
module Cache(...);
    CacheStage s1(...);
    CacheStage s2(...);
    CacheStage s3(...);
    CacheMeta cachemeta(...);
endmodule
CacheStage.sv
// In CacheStage.sv
module CacheStage(...);
    ...
endmodule
CacheMeta.sv
// In CacheMeta.sv
module CacheMeta(...);
    ...
endmodule
Usage
In this case, the module under test is Cache, which is in Cache.sv. You can generate the DUT using the following command:
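Consistent with the file-list variant shown below, the command is presumably:

picker export Cache.sv --fs CacheStage.sv,CacheMeta.sv --sname Cache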
You can also use a .txt file to specify multiple input files:
picker export Cache.sv --fs src.txt --sname Cache
Where the contents of src.txt are:
CacheStage.sv
CacheMeta.sv
Notes
It is important to note that even when using multiple file inputs, you still need to specify the file containing the top-level module under test, as shown in the example above with Cache.sv.
When using multiple file inputs, Picker will pass all files to the simulator, which will compile them simultaneously. Therefore, it is necessary to ensure that the module names in all files are unique.
2.4 - Coverage Statistics
Coverage tools
The Picker tool supports generating code line coverage reports, and the MLVP (https://github.com/XS-MLVP/mlvp) project supports generating functional coverage reports.
Code Line Coverage
Currently, the Picker tool supports generating code line coverage reports based on the Verilator simulator.
Verilator
The Verilator simulator provides coverage support.
The implementation is as follows:
Use the verilator_coverage tool to process or merge coverage databases, ultimately generating a coverage.info file for multiple DUTs.
Use the genhtml command of the lcov tool based on coverage.info and RTL code source files to generate a complete code coverage report.
The process is as follows:
Enable the COVERAGE feature when generating the DUT with Picker (add the -c option).
After the simulator runs and dut.Finish() is called, a coverage database file V{DUT_NAME}.dat is generated.
Use the write-info function of verilator_coverage to convert it to a .info file.
Use the genhtml function of lcov to generate an HTML report using the .info file and the RTL source files specified in the file.
Note: The RTL source files specified in the file refer to the source file paths used when generating the DUT, and these paths need to be valid in the current environment. In simple terms, all .sv/.v files used for compilation need to exist in the current environment, and the directory remains unchanged.
verilator_coverage
The verilator_coverage tool is used to process the .dat coverage databases generated by DUT runs. It can process and merge multiple .dat files, and it has two main functions:

Present the coverage situation in annotated form in copies of the source files: the .dat data is combined with the source code, and the annotated result is written to a specified directory.
Generate a .info file from the .dat files, for subsequent generation of a web page report.

Its main options are:

-annotate <output_dir>: Present the coverage situation in the source files in annotated form, and save the result to output_dir. The format is as follows:

 100000  input logic a;   // Begins with whitespace, because
                          // the number of hits (100000) is above the limit.
%000000  input logic b;   // Begins with %, because
                          // the number of hits (0) is below the limit.

-annotate-min <count>: Specify count as the limit used above.
-write <merged-datafile> -read <datafiles>: Merge several .dat files (datafiles) into one .dat file.
-write-info <merged-info> -read <datafiles>: Merge several .dat files (datafiles) into one .info file.
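For example, following the option formats above (the .dat file names are placeholders), merging the databases of two runs and converting the result into an .info file might look like:

verilator_coverage -write merged.dat -read run1.dat run2.dat
verilator_coverage -write-info coverage.info -read merged.dat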
genhtml
The genhtml provided by the lcov package can export a more readable HTML report from the .info file. The command format is: genhtml [OPTIONS] <infofiles>.
It is recommended to use the -o <outputdir> option to output the results to a specified directory.
If you enabled the -c option when using Picker, a V{DUT_NAME}.dat file will be generated after the simulation ends, along with a Makefile in the top-level directory that contains the command for generating the coverage report.
Enter make coverage in the shell: this generates coverage.info from the .dat file and then uses genhtml to produce an HTML report in the coverage directory.
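For reference, the make coverage target is roughly equivalent to the following two commands, assuming a DUT named Adder so that the database file is VAdder.dat:

verilator_coverage --write-info coverage.info VAdder.dat
genhtml -o coverage coverage.info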
VCS
Documentation for VCS is currently being finalized.
2.5 - Integrated Testing Framework
Available Software Testing Frameworks
In traditional chip verification practices, frameworks like UVM are widely adopted. Although they provide a comprehensive set of verification methodologies, they are typically confined to specific hardware description languages and simulation environments. Our tool breaks these limitations by converting simulation code into C++ or Python, allowing us to leverage software verification tools for more comprehensive testing.
Given Python’s robust ecosystem, this project primarily uses Python as an example, briefly introducing two classic software testing frameworks: Pytest and Hypothesis. Pytest handles various testing needs with its simple syntax and rich features. Meanwhile, Hypothesis enhances the thoroughness and depth of testing by generating test cases that uncover unexpected edge cases.
Our project is designed from the outset to be compatible with various modern software testing frameworks. We encourage you to explore the potential of these tools and apply them to your testing processes. Through hands-on practice, you will gain a deeper understanding of how these tools can enhance code quality and reliability. Let’s work together to improve the quality of chip development.
2.5.1 - PyTest
Used for managing tests and generating test reports.
Software Testing
Before we start with pytest, let’s understand software testing. Software testing generally involves the following four aspects:
Unit Testing: Also known as module testing, it involves checking the correctness of program modules, which are the smallest units in software design.
Integration Testing: Also known as assembly testing, it usually builds on unit testing by sequentially and incrementally testing all program modules, focusing on the interface parts of different modules.
System Testing: It treats the entire software system as a whole for testing, including testing the functionality, performance, and the software’s running environment.
Acceptance Testing: Refers to testing the entire system according to the project task book, contract, and acceptance criteria agreed upon by both the supply and demand sides, to determine whether to accept or reject the system.
pytest was initially designed as a unit testing framework, but it also provides many features that allow it to be used for a wider range of testing, including integration testing and system testing.
It simplifies test writing and execution by collecting test functions and modules and providing a rich assertion library. It is a very mature and powerful Python testing framework with the following key features:
Simple and Flexible: Pytest is easy to get started with and is flexible.
Supports Parameterization: You can easily provide different parameters for test cases.
Full-featured: Pytest not only supports simple unit testing but can also handle complex functional testing. You can even use it for automation testing, such as Selenium or Appium testing, as well as interface automation testing (combining Pytest with the Requests library).
Rich Plugin Ecosystem: Pytest has many third-party plugins, and you can also customize extensions. Some commonly used plugins include:
pytest-selenium: Integrates Selenium.
pytest-html: Generates HTML test reports.
pytest-rerunfailures: Repeats test cases in case of failure.
pytest-xdist: Supports multi-CPU distribution.
Well Integrated with Jenkins.
Supports Allure Report Framework.
This article will briefly introduce the usage of pytest based on testing requirements. The complete manual is available here for students to study in depth.
# When using pytest, module names are usually prefixed with test or end with test.
# You can also modify the configuration file to customize the naming convention.
#   test_*.py or *_test.py
#   e.g., test_demo1.py, demo2_test.py

# Class names in a module must start with Test and must not have an __init__ method.
class TestDemo1:
    ...

class TestLogin:
    ...

# The test methods defined in a class must start with test_
#   e.g., test_demo1(self), test_demo2(self)

# A complete test case:
class TestOne:
    def test_demo1(self):
        print("Test Case 1")

    def test_demo2(self):
        print("Test Case 2")
Pytest Parameters
pytest supports many parameters, which can be viewed using the help command.
pytest --help
Here are some commonly used ones:
-m: Specify multiple tag names with an expression. pytest provides a decorator @pytest.mark.xxx for marking tests and grouping them (xxx is the group name you defined), so you can quickly select and run them, with different groups separated by and or or; see the sketch after this list.
-v: Outputs more detailed information during runtime. Without -v, the runtime does not display the specific test case names being run; with -v, it prints out the specific test cases in the console.
-q: Similar to the verbosity in unittest, used to simplify the runtime output. When running tests with -q, only simple runtime information is displayed, for example:
.s..                                                                 [100%]
3 passed, 1 skipped in 9.60s
-k: You can run specified test cases using an expression. It is a fuzzy match, with and or or separating keywords, and the matching range includes file names, class names, and function names.
-x: Exit the test if one test case fails. This is very useful for debugging. When a test fails, stop running the subsequent tests.
-s: Display print content. When running test scripts, we often add some print content for debugging or printing some content. However, when running pytest, this content is not displayed. If you add -s, it will be displayed.
pytest test_se.py -s
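As an illustration of the -m and -k options above, here is a small sketch (the marker names smoke and regression are arbitrary examples; custom markers would normally also be registered in pytest.ini):

import pytest

@pytest.mark.smoke
def test_reset():
    # A quick check that belongs to the "smoke" group
    assert 1 + 1 == 2

@pytest.mark.regression
def test_full_function():
    # A longer check that belongs to the "regression" group
    assert "cache" in "nutshell cache"

# Run only the smoke group:          pytest -m smoke
# Run by keyword fuzzy matching:     pytest -k "reset or full"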
Selecting Test Cases to Execute with Pytest
In Pytest, you can select and execute test cases based on different dimensions such as test folders, test files, test classes, and test methods.
Execute by test folder
# Execute all test cases in the current folder and subfolders
pytest .

# Execute all test cases in the tests folder (at the same level as the current folder) and its subfolders
pytest ../tests

# Execute by test file:
# Run all test cases in test_se.py
pytest test_se.py

# Execute by test class, in the format pytest file_name.py::TestClass,
# where "::" separates the test module and the test class.
# Run all test cases under the class TestSE in test_se.py
pytest test_se.py::TestSE

# Execute by test method, in the format pytest file_name.py::TestClass::TestMethod,
# where "::" separates the test module, test class, and test method.
# Run the test case test_get_new_message under the class TestSE in test_se.py
pytest test_se.py::TestSE::test_get_new_message

# The above ways of selecting test cases all work on the command line.
# To run tests directly from a test program, call pytest.main(), e.g.:
pytest.main(['module.py::TestClass::test_method'])
In addition, Pytest also supports multiple ways to control the execution of test cases, such as filtering execution, running in multiple processes, retrying execution, etc.
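For example, with the pytest-xdist and pytest-rerunfailures plugins mentioned earlier installed, runs can be distributed or retried using those plugins' standard flags:

pytest -n 4          # pytest-xdist: distribute tests across 4 CPUs
pytest --reruns 2    # pytest-rerunfailures: retry failed test cases up to 2 times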
Writing Validation with Pytest
During testing, we use the previously verified adder. In the picker_out_adder directory of the Adder example, create a new test_adder.py file with the following content:
# Import test modules and required libraries
from Adder import *
import pytest
import ctypes
import random

# Use pytest fixture to initialize and clean up resources
@pytest.fixture
def adder():
    # Create an instance of DUTAdder, load the dynamic link library
    dut = DUTAdder()
    # Execute one clock step to prepare the DUT
    dut.Step(1)
    # The code after the yield statement is executed after the test ends, used to clean up resources
    yield dut
    # Clean up DUT resources and generate test coverage reports and waveforms
    dut.Finish()

class TestFullAdder:
    # Define full_adder as a static method, as it does not depend on class instances
    @staticmethod
    def full_adder(a, b, cin):
        cin = cin & 0b1
        Sum = ctypes.c_uint64(a).value
        Sum += ctypes.c_uint64(b).value + cin
        Cout = (Sum >> 64) & 0b1
        Sum &= 0xffffffffffffffff
        return Sum, Cout

    # Use the pytest.mark.usefixtures decorator to specify the fixture to use
    @pytest.mark.usefixtures("adder")
    # Define the test method, where adder is injected by pytest through the fixture
    def test_adder(self, adder):
        # Perform multiple random tests
        for _ in range(114514):
            # Generate random 64-bit a, b, and 1-bit cin
            a = random.getrandbits(64)
            b = random.getrandbits(64)
            cin = random.getrandbits(1)
            # Set the inputs of the DUT
            adder.a.value = a
            adder.b.value = b
            adder.cin.value = cin
            # Execute one clock step
            adder.Step(1)
            # Calculate the expected result using the static method
            sum, cout = self.full_adder(a, b, cin)
            # Assert that the outputs of the DUT match the expected results
            assert sum == adder.sum.value
            assert cout == adder.cout.value

if __name__ == "__main__":
    pytest.main(['-v', 'test_adder.py::TestFullAdder'])
The passing test indicates that after 114514 loops, no bugs have been found in our device so far. However, running many loops of randomly generated test cases consumes considerable resources, and such random cases may not effectively cover all boundary conditions. In the next section, we will introduce a more efficient method for generating test cases.
2.5.2 - Hypothesis
Can Be Used to Generate Stimuli
Hypothesis
In the previous section, we manually wrote test cases and specified inputs and expected outputs for each case. This method has some issues, such as incomplete test case coverage and a tendency to overlook boundary conditions.

Hypothesis is a Python library for property-based testing. Its main goal is to make testing simpler, faster, and more reliable. Instead of enumerating cases, you write assumptions (properties) about your code, and Hypothesis automatically generates test cases to verify them. This makes it easier to write comprehensive and efficient tests. Hypothesis can automatically generate various types of input data, including basic types (e.g., integers, floats, strings), container types (e.g., lists, sets, dictionaries), and custom types. It tests against the properties (assertions) you provide; if a test fails, it tries to shrink the input data to find the smallest failing case.

With Hypothesis, you can better cover the boundary conditions of your code and uncover errors you might not have considered. This helps improve the quality and reliability of your code.
Basic Concepts
Test Function: The function or method to be tested.
Properties: Conditions that the test function should satisfy. Properties are applied to the test function as decorators.
Strategy: A generator for test data. Hypothesis provides a range of built-in strategies, such as integers, strings, lists, etc. You can also define custom strategies.
Test Generator: A function that generates test data based on strategies. Hypothesis automatically generates test data and passes it as parameters to the test function.
This article will briefly introduce Hypothesis based on testing requirements. The complete manual is available for in-depth study.
Installation
Install with pip and import in Python to use:
pip install hypothesis
import hypothesis
Basic Usage
Properties and Strategies
Hypothesis uses property decorators to define the properties of test functions. The most common decorator is @given, which specifies the properties the test function should satisfy.
We can define a test function test_addition using the @given decorator and add properties to x. The test generator will automatically generate test data for the function and pass it as parameters, for example:
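A minimal sketch of what this might look like (the property x + 1 > x is an illustrative assumption, not from the original text):

from hypothesis import given, strategies as st

@given(st.integers())
def test_addition(x):
    # Hypothesis calls this function many times with generated integers
    assert x + 1 > x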
In this example, integers() is a built-in strategy for generating integer test data. Hypothesis offers a variety of built-in strategies for generating different types of test data. Besides integers(), there are strategies for strings, booleans, lists, dictionaries, etc. For instance, using the text() strategy to generate string test data and using lists(text()) to generate lists of strings:
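For instance, a brief sketch (the properties checked here are illustrative assumptions):

from hypothesis import given, strategies as st

@given(st.text())
def test_upper_idempotent(s):
    # Applying upper() twice should equal applying it once
    assert s.upper() == s.upper().upper()

@given(st.lists(st.text()))
def test_sorted_preserves_length(lst):
    # Sorting a list of strings should not change its length
    assert len(sorted(lst)) == len(lst)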
When using Hypothesis for testing, we can use standard Python assertions to verify the properties of the test function. Hypothesis will automatically generate test data and run the test function based on the properties defined in the decorator. If an assertion fails, Hypothesis will try to narrow down the test data to find the smallest failing case.
Suppose we have a string reversal function. We can use an assert statement to check if reversing a string twice equals itself:
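A minimal sketch, where reverse_string is a hypothetical implementation introduced for illustration:

from hypothesis import given, strategies as st

def reverse_string(s: str) -> str:
    # Reverse a string using slicing
    return s[::-1]

@given(st.text())
def test_reverse_twice_is_identity(s):
    # Reversing a string twice should yield the original string
    assert reverse_string(reverse_string(s)) == s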
Tests in Hypothesis consist of two parts: a function that looks like a regular test in your chosen framework but with some extra parameters, and a @given decorator specifying how to provide those parameters. Here’s an example of how to use it to verify a full adder, which we tested previously:
Based on the previous section’s code, we modify the method of generating test cases from random numbers to the integers() method. The modified code is as follows:
from Adder import *
import pytest
import ctypes
from hypothesis import given, strategies as st

# Using pytest fixture to initialize and clean up resources
@pytest.fixture(scope="class")
def adder():
    # Create DUTAdder instance and load dynamic library
    dut = DUTAdder()
    # Perform a clock step to prepare the DUT
    dut.Step(1)
    # Code after yield executes after tests finish, for cleanup
    yield dut
    # Clean up DUT resources and generate coverage report and waveform
    dut.Finish()

class TestFullAdder:
    # Define full_adder as a static method, as it doesn't depend on class instance
    @staticmethod
    def full_adder(a, b, cin):
        cin = cin & 0b1
        Sum = ctypes.c_uint64(a).value
        Sum += ctypes.c_uint64(b).value + cin
        Cout = (Sum >> 64) & 0b1
        Sum &= 0xffffffffffffffff
        return Sum, Cout

    # Use Hypothesis to automatically generate test cases
    @given(a=st.integers(min_value=0, max_value=0xffffffffffffffff),
           b=st.integers(min_value=0, max_value=0xffffffffffffffff),
           cin=st.integers(min_value=0, max_value=1))
    # Define test method, adder parameter injected by pytest via fixture
    def test_full_adder_with_hypothesis(self, adder, a, b, cin):
        # Calculate expected sum and carry
        sum_expected, cout_expected = self.full_adder(a, b, cin)
        # Set DUT inputs
        adder.a.value = a
        adder.b.value = b
        adder.cin.value = cin
        # Perform a clock step
        adder.Step(1)
        # Assert DUT outputs match expected results
        assert sum_expected == adder.sum.value
        assert cout_expected == adder.cout.value

if __name__ == "__main__":
    # Run specified tests in verbose mode
    pytest.main(['-v', 'test_adder.py::TestFullAdder'])
In this example, the @given decorator and strategies are used to generate random data that meets specified conditions. st.integers() is a strategy for generating integers within a specified range, used to generate numbers between 0 and 0xffffffffffffffff for a and b, and between 0 and 1 for cin. Hypothesis will automatically rerun this test multiple times, each time using different random inputs, helping reveal potential boundary conditions or edge cases.
As we can see, the tests were completed in a short amount of time.
3 - Verification Basics
Introduction to the basic knowledge required for working with the open verification platform.
Introduction to chip verification using the Guoke Cache as an example, covering the basic verification process and report writing.
3.1 - Chip Verification
Basic concepts of chip verification
This page provides a brief introduction to chip verification, including concepts used in examples such as DUT (Design Under Test) and RM (Reference Model).
The chip verification process needs to align with the actual situation of the company or team; there is no absolute standard that fits all requirements. The process described here should be treated as a reference only.
What is Chip Verification?
The chip design-to-production process involves three main stages: chip design, chip manufacturing, and chip packaging/testing. Chip design is further divided into front-end and back-end design. Front-end design, also known as logic design, aims to achieve the desired circuit logic functionality. Back-end design, or physical design, focuses on optimizing layout and routing to reduce chip area, lower power consumption, and increase frequency.

Chip verification is a critical step in the chip design process. Its goal is to ensure that the designed chip meets the specified requirements in terms of functionality, performance, and power consumption. The verification process typically includes functional verification, timing verification, and power verification, using methods and tools such as simulation, formal verification, hardware acceleration, and prototyping. For this tutorial, chip verification refers only to the verification of the front-end design to ensure that the circuit logic meets the specified requirements ("Does this proposed design do what is intended?"), commonly known as functional verification. This does not include back-end design aspects like power and frequency.
For chip products, design errors that make it to production can be extremely costly to fix, as it might require recalling products and remanufacturing chips, incurring significant financial and time costs. Here are some classic examples of failures due to inadequate chip verification:
Intel Pentium FDIV Bug: In 1994, Intel's Pentium processor was found to have a severe division error known as the FDIV bug. This error was due to incorrect entries in a lookup table within the chip's floating-point unit. Although it rarely affected most applications, it caused incorrect results in specific calculations. Intel had to recall a large number of processors, leading to significant financial losses.
Ariane 5 Rocket Failure: Though not a chip example, this highlights the importance of hardware verification. In 1996, the European Space Agency's Ariane 5 rocket exploded shortly after launch due to an overflow when converting a 64-bit floating-point number to a 16-bit integer in the navigation system, causing the system to crash. This error went undetected during design and led to the rocket's failure.
AMD Barcelona Bug: In 2007, AMD's Barcelona processor had a severe Translation Lookaside Buffer (TLB) error that could cause system crashes or reboots. AMD had to mitigate this by lowering the processor's frequency and releasing BIOS updates, which negatively impacted their reputation and financial status.
These cases emphasize the importance of chip verification. Errors detected and fixed during the design phase can prevent these costly failures. Insufficient verification continues to cause issues today, such as a new entrant in the ASIC chip market rushing a 55nm chip without proper verification, leading to three failed tape-outs and approximately $500,000 in losses per failure.
Chip Verification Process
The coupling relationship between chip design and verification is shown in the diagram above. Both design and verification have the same input: the specification document. Based on this document, both design and verification teams independently code according to their understanding and requirements. The design team needs to ensure that the RTL code is “synthesizable,” considering circuit characteristics, while the verification team mainly focuses on whether the functionality meets the requirements, with fewer coding constraints. After both teams complete module development, a sanity test is conducted to check if the functionality matches. If there are discrepancies, collaborative debugging is done to identify and fix issues before retesting. Due to the high coupling between chip design and verification, some companies directly couple their design and verification teams, assigning verification teams to each design submodule. The coupling process in the diagram is coarse-grained, with specific chips (e.g., SoC, DDR) and companies having their cooperation models.
In the above comparison test, the module produced by the design team is usually called DUT (Design Under Test), while the model developed by the verification team is called RM (Reference Model). The verification process includes: writing a verification plan, creating a verification platform, organizing functional points, constructing test cases, running and debugging, collecting bugs/coverage, regression testing, and writing test reports.
Verification Plan: The verification plan describes how verification will be carried out and how verification quality will be ensured to meet functional verification requirements. It typically includes verification goals, strategies, environment, items, process, risk mitigation, resources, schedule, results, and reports. Verification goals specify the functions or performance metrics to be verified, directly extracted from the chip specification. Verification strategy outlines the methods to be used, such as simulation, formal verification, FPGA acceleration, etc., and how to organize the verification tasks. The verification environment details the specific testing environment, including verification tools and versions. The verification item library lists specific items to be verified and expected results. Verification plans can be general or specific to sub-tasks.
Platform Setup: The verification platform is the execution environment for specific verification tasks. Similar verification tasks can use the same platform. Setting up the platform is a key step, including choosing verification tools (e.g., software simulation, formal verification, hardware acceleration), configuring the environment (e.g., server, FPGA), creating the test environment, and basic test cases. Initial basic test cases are often called “smoke tests.” Subsequent test codes are based on this platform, so it must be reusable. The platform includes the test framework, the code being tested, and basic signal stimuli.
Organizing Functional Points: This involves listing the DUT’s basic functions based on the specification manual and detailing how to test each function. Functional points are prioritized based on importance, risk, and complexity. They also need to be tracked for status, with updates synchronized to the plan if changes occur.
Test Cases: These are conditions or variables used to determine if the DUT meets specific requirements and operates correctly. Each case includes test conditions, input data, expected results, actual results, and test outcomes. Running test cases and comparing expected vs. actual results help verify the system or application's correct implementation of functions or requirements. Test cases are crucial tools for verifying chip design against specifications.
Coding Implementation: This is the execution of test cases, including generating test data, selecting the test framework, programming language, and writing the reference model. This phase requires a deep understanding of functional points and test cases. Misunderstandings can lead to the DUT being undrivable or undetected bugs.
Collecting Bugs/Coverage: The goal of verification is to find design bugs early, so collected bugs need unique identifiers, severity ratings, and status tracking with design engineers. Discovering bugs is ideal, but since not every test finds bugs, coverage is another metric to evaluate verification thoroughness. Sufficient verification is indicated when coverage (e.g., code coverage >90%) exceeds a threshold.
Regression Testing: As verification and design are iterative, regression tests ensure the modified DUT still functions correctly after bug fixes. This catches new errors or reactivates old ones due to changes. Regression tests can be comprehensive or selective, covering all functions or specific parts.
Test Report: This summarizes the entire verification process, providing a comprehensive view of the testing activities, including objectives, executed test cases, discovered issues, coverage, and efficiency.
Levels of Chip Verification
Chip verification typically includes four levels based on the object size: UT, BT, IT, and ST.
Unit Testing (UT): The lowest verification level, focusing on single modules or components to ensure their functionality is correct.
Block Testing (BT): Modules often have tight coupling, making isolated UT testing complex. BT merges several coupled modules into one DUT block for testing.
Integration Testing (IT): Builds on UT by combining multiple modules or components to verify their collaborative functionality, usually testing subsystem functionality.
System Testing (ST): Also called Top verification, ST combines all modules or components into a complete system to verify overall functionality and performance requirements.
In theory, these levels follow a bottom-up order, each building on the previous level. However, practical verification activities depend on the scale, expertise, and functional needs of the enterprise, so not all levels are always involved. At each level, relevant test cases are written, tests run, and results analyzed to ensure the chip design’s correctness and quality.
Chip Verification Metrics
Verification metrics typically include functional correctness, test coverage, defect density, verification efficiency, and verification cost. Functional correctness is the fundamental metric, ensuring the chip executes its designed functions correctly. This is validated through functional test cases, including normal and robustness tests. Test coverage indicates the extent to which test cases cover design functionality, with higher coverage implying higher verification quality. Coverage can be further divided into code coverage, functional coverage, condition coverage, etc. Defect density measures the number of defects found in a given design scale or code volume, with lower density indicating higher design quality. Verification efficiency measures the amount of verification work completed within a given time and resource frame, with higher efficiency indicating higher productivity. Verification cost encompasses all resources required for verification, including manpower, equipment, and time, with lower costs indicating higher cost-effectiveness.
Functional correctness is the absolute benchmark for verification. However, in practice, it is often impossible to determine if the test plan is comprehensive and if all test spaces have been adequately covered. Therefore, a quantifiable metric is needed to guide whether verification is sufficient and when it can be concluded. This metric is commonly referred to as “test coverage.” Test coverage typically includes code coverage (lines, functions, branches) and functional coverage.
Code Line Coverage: This indicates how many lines of the DUT design code were executed during testing.
Function Coverage: This indicates how many functions of the DUT design code were executed during testing.
Branch Coverage: This indicates how many branches (if-else) of the DUT design code were executed during testing.
Functional Coverage: This indicates how many predefined functions were triggered during testing.
High code coverage can improve the quality and reliability of verification but does not guarantee complete correctness since it cannot cover all input and state combinations. Therefore, in addition to pursuing high code coverage, other testing methods and metrics, such as functional testing, performance testing, and defect density, should be combined.
Chip Verification Management
Chip verification management is a comprehensive process that encompasses all activities in the chip verification process, including the development of verification strategies, the setup of the verification environment, the writing and execution of test cases, the collection and analysis of results, and the tracking and resolution of issues and defects. The goal of chip verification management is to ensure that the chip design meets all functional and performance requirements, as well as specifications and standards.
In chip verification management, the first step is to formulate a detailed verification strategy, including objectives, scope, methods, and schedules. Then, a suitable verification environment must be set up, including hardware, software tools, and test data. Next, a series of test cases covering all functional and performance points must be written and executed, with results collected and analyzed to identify problems and defects. Finally, these issues and defects need to be tracked and fixed until all test cases pass.
Chip verification management is a complex process requiring a variety of skills and knowledge, including chip design, testing methods, and project management. It requires close collaboration with other activities, such as chip design, production, and sales, to ensure the quality and performance of the chip. The effectiveness of chip verification management directly impacts the success of the chip and the company’s competitiveness. Therefore, chip verification management is a crucial part of the chip development process.
The chip verification management process can be based on a “project management platform” and a “bug management platform,” with platform-based management typically being significantly more efficient than manual management.
Current State of Chip Verification
Currently, chip verification is typically completed within chip design companies. This process is not only technically complex but also entails significant costs. Given the close relationship between verification and design, chip verification inevitably involves the source code of the chip design. However, chip design companies usually consider the source code a trade secret, necessitating internal personnel to perform the verification and making outsourcing difficult.
The importance of chip verification lies in ensuring that the designed chip operates reliably under various conditions. Verification is not only for meeting technical specifications but also for addressing the growing complexity and emerging technology demands. As the semiconductor industry evolves, the workload of chip verification has been continuously increasing, especially for complex chips, where verification work has exceeded design work, accounting for more than 70%. This means that in terms of engineer personnel ratio, verification engineers are usually twice the number of design engineers (e.g., in a team of three thousand at Zeku, there are about one thousand design engineers and two thousand verification engineers. Similar or higher ratios apply to other large chip design companies).
Due to the specificity of verification work, which requires access to the chip design source code, it significantly limits the possibility of outsourcing chip verification. The source code is considered the company’s core trade secret, involving technical details and innovations, thus making it legally and securely unfeasible to share with external parties. Consequently, internal personnel must shoulder the verification work, increasing the internal workload and costs.
Given the current situation, the demand for chip verification engineers continues to grow. They need a solid technical background, familiarity with various verification tools and methods, and keen insight into emerging technologies. Due to the complexity of verification work, verification teams typically need a large scale, contrasting sharply with the design team size.
To meet this challenge, the industry may need to continuously explore innovative verification methods and tools to improve efficiency and reduce costs.
Summary: Complex Chip Verification Costs
High Verification Workload: For complex chips, verification work accounts for over 70% of the entire chip design work.
High Labor Costs: The number of verification engineers is twice that of design engineers, with complex tasks requiring thousands of engineers.
Internal Verification: To ensure trade secrets (chip design code) are not leaked, chip design companies can only hire a large number of verification engineers to perform verification work internally.
Crowdsourcing Chip Verification
In contrast to hardware, the software field has already made testing outsourcing (subcontracting) a norm to reduce testing costs. This business is highly mature, with a market size in the billions of yuan, advancing towards the trillion-yuan scale. From the content perspective, software testing and hardware verification share significant similarities (different targets with the same system objective). Is it feasible to subcontract hardware verification in the same way as software?
Crowdsourcing chip verification faces many challenges, such as:
Small Number of Practitioners: Compared to the software field, the number of hardware developers is several orders of magnitude smaller. For instance, according to GitHub statistics (https://madnight.github.io/githut/#/pull_requests/2023/2), traditional software programming languages (Python, Java, C++, Go) account for nearly 50%, whereas hardware description languages like Verilog account for only 0.076%, reflecting the disparity in developer numbers.
Commercial Verification Tools: The verification tools used in enterprises (simulators, formal verification, data analysis) are almost all commercial tools, which are nearly invisible to ordinary people and difficult to self-learn.
Lack of Open Learning Materials: Chip verification involves accessing the chip design source code, which is typically regarded as the company’s trade secrets and proprietary technology. Chip design companies may be unwilling to disclose detailed verification processes and techniques, limiting the availability of learning materials.
Feasibility Analysis
Although the chip verification field has been relatively closed, from a technical perspective, adopting a subcontracting approach for verification is a feasible option due to several factors:
Firstly, with the gradual increase of open-source chip projects, the source code involved in verification has become more open and transparent. These open-source projects do not have concerns about trade secrets in their design and verification process, providing more possibilities for learning and research. Even if some projects involve trade secrets, encryption and other methods can be used to hide design codes, addressing trade secret issues to a certain extent and making verification easier to achieve.
Secondly, many fundamental verification tools have emerged in the chip verification field, such as Verilator and SystemC. These tools provide robust support for verification engineers, helping them perform verification work more efficiently. These tools alleviate some of the complexity and difficulty of the verification process, providing a more feasible technical foundation for adopting subcontracted verification methods.
In the open-source software field, some successful cases can be referenced. For example, the Linux kernel verification process adopts a subcontracting approach, with different developers and teams responsible for verifying different modules, ultimately forming a complete system. Similarly, in the machine learning field, the ImageNet project adopted a crowdsourced annotation strategy, completing large-scale image annotation tasks through crowdsourcing. These cases provide successful experiences for the chip verification field, proving the potential of subcontracted verification to improve efficiency and reduce costs.
Therefore, despite the chip verification field being relatively closed compared to other technical fields, technological advances and the increase of open-source projects offer new possibilities for adopting subcontracted verification. By drawing on successful experiences from other fields and utilizing existing verification tools, we can promote the application of more open and efficient verification methods in chip verification, further advancing the industry. This openness and flexibility in technology will provide more choices for verification engineers, promoting innovative and diverse development in the chip verification field.
Technical Route
To overcome challenges and engage more people in chip verification, this project continuously attempts the following technical directions:
Provide Multi-language Verification Tools: Traditional chip verification is based on the SystemVerilog programming language, which has a small user base. To allow other software development/testing professionals to participate in chip verification, this project provides the multi-language verification conversion tool Picker, enabling verifiers to use familiar programming languages (e.g., C++, Python, Java, Go) with open-source verification tools.
Provide Verification Learning Materials: The scarcity of chip verification learning materials is mainly due to the improbability of commercial companies disclosing internal data. Therefore, this project will continuously update learning materials, allowing verifiers to learn the necessary skills online for free.
Provide Real Chip Verification Cases: To make the learning materials more practical, this project uses the “Xiangshan Kunming Lake (an industrial-grade high-performance RISC-V processor) IP core” as a basis, continuously updating verification cases by extracting modules from it.
Organize Chip Design Subcontracted Verification: Applying what is learned is the goal of every learner. Therefore, this project periodically organizes subcontracted chip design verification, allowing everyone (whether you are a university student, verification expert, software developer, tester, or high school student) to participate in real chip design work.
The goal of this project is to achieve the following vision: “Open the black box of traditional verification modes, allowing anyone interested to participate in chip verification anytime, anywhere, using their preferred programming language.”
3.2 - Digital Circuits
Basic concepts of digital circuits
This page introduces the basics of digital circuits. Digital circuits use digital signals and are the foundation of most modern computers.
What Are Digital Circuits
Digital circuits are electronic circuits that use two discrete voltage levels to represent information. Typically, digital circuits use two power supply voltages to indicate high (H) and low (L) levels, representing the digits 1 and 0 respectively. This representation uses binary signals to transmit and process information.
Most digital circuits are built using field-effect transistors, with MOSFETs (Metal-Oxide-Semiconductor Field-Effect Transistors) being the most common. MOSFETs are semiconductor devices that control current flow using an electric field, enabling digital signal processing.
In digital circuits, MOSFETs are combined to form various logic gates like AND, OR, and NOT gates. These logic gates are combined in different ways to create the various functions and operations in digital circuits. Here are some key features of digital circuits:
(1) Voltage Representation: Digital circuits use two voltage levels, high and low, to represent digital information. Typically, a high level represents the digit 1, and a low level represents the digit 0.
(2) MOSFET Implementation: MOSFETs are one of the most commonly used components in digital circuits. By controlling the on and off states of MOSFETs, digital signal processing and logic operations can be achieved.
(3) Logic Gate Combinations: Logic gates, composed of MOSFETs, are the basic building blocks of digital circuits. By combining different logic gates, complex digital circuits can be built to perform various logical functions.
(4) Binary Representation: Information in digital circuits is typically represented using the binary system. Each digit can be made up of a series of binary bits, which can be processed and operated on within digital circuits.
(5) Signal Processing: Digital circuits convert and process signals through changes in voltage and logic operations. This discrete processing method makes digital circuits well-suited for computing and information processing tasks.
Why Learn Digital Circuits
Learning digital circuits is fundamental and necessary for the chip verification process, primarily for the following reasons:
(1) Understanding Design Principles: Digital circuits are the foundation of chip design. Knowing the basic principles and design methods of digital circuits is crucial for understanding the structure and function of chips. The goal of chip verification is to ensure that the designed digital circuits work according to specifications in actual hardware, and understanding digital circuits is key to comprehending the design.
(2) Design Standards: Chip verification typically involves checking whether the design meets specific standards and functional requirements. Learning digital circuits helps in understanding these standards, thus building better test cases and verification processes to ensure thorough and accurate verification.
(3) Timing and Clocks: Timing issues are common challenges in digital circuit design and verification. Learning digital circuits helps in understanding concepts of timing and clocks, ensuring that timing issues are correctly handled during verification, avoiding timing delays and conflicts in the circuit.
(4) Logical Analysis: Chip verification often involves logical analysis to ensure circuit correctness. Learning digital circuits fosters a deep understanding of logic, aiding in logical analysis and troubleshooting.
(5) Writing Test Cases: In chip verification, various test cases need to be written to ensure design correctness. Understanding digital circuits helps in designing comprehensive and targeted test cases, covering all aspects of the circuit.
(6) Signal Integrity: Learning digital circuits helps in understanding signal propagation and integrity issues within circuits. Ensuring proper signal transmission under different conditions is crucial, especially in high-speed designs.
Overall, learning digital circuits provides foundational knowledge and tools for chip verification, enabling verification engineers to better understand designs, write effective test cases, analyze verification results, and troubleshoot issues. Theoretical and practical experience with digital circuits is indispensable for chip verification engineers.
Digital Circuits Basics
You can learn digital circuits through the following online resources:
Hardware Description Languages (HDL) are languages used to describe digital circuits, systems, and hardware. They allow engineers to describe hardware structure, function, and behavior through text files, enabling abstraction and modeling of hardware designs.
HDL is commonly used for designing and simulating digital circuits such as processors, memory, controllers, etc. It provides a formal method to describe the behavior and structure of hardware circuits, making it easier for design engineers to perform hardware design, verification, and simulation.
Common hardware description languages include:
Verilog: One of the most used HDLs, Verilog is an event-driven language widely used for digital circuit design, verification, and simulation.
VHDL: Another common HDL, VHDL is an object-oriented language offering richer abstraction and modular design methods.
SystemVerilog: An extension of Verilog, SystemVerilog introduces advanced features like object-oriented programming and randomized testing, making Verilog more suitable for complex system design and verification.
Chisel
Chisel is a modern, advanced hardware description language that differs from traditional Verilog and VHDL. It’s a hardware construction language based on Scala. Chisel offers a more modern and flexible way to describe hardware, leveraging Scala’s features to easily implement parameterization, abstraction, and reuse while maintaining hardware-level efficiency and performance.
Chisel’s features include:
Modern Syntax: Chisel’s syntax is more similar to software programming languages like Scala, making hardware description more intuitive and concise.
Parameterization and Abstraction: Chisel supports parameterization and abstraction, allowing for the creation of configurable and reusable hardware modules.
Type Safety: Based on Scala, Chisel has type safety features, enabling many errors to be detected at compile-time.
Generating Performance-Optimized Hardware: Chisel code can be converted to Verilog and then synthesized, placed, routed, and simulated by standard EDA toolchains to generate performance-optimized hardware.
Strong Simulation Support: Chisel provides simulation support integrated with ScalaTest and Firrtl, making hardware simulation and verification more convenient and flexible.
Chisel Example of a Full Adder
The circuit design is shown below:
Complete Chisel code:
package examples

import chisel3._

class FullAdder extends Module {
  // Define IO ports
  val io = IO(new Bundle {
    val a    = Input(UInt(1.W))   // Input port 'a' of width 1 bit
    val b    = Input(UInt(1.W))   // Input port 'b' of width 1 bit
    val cin  = Input(UInt(1.W))   // Input port 'cin' (carry-in) of width 1 bit
    val sum  = Output(UInt(1.W))  // Output port 'sum' of width 1 bit
    val cout = Output(UInt(1.W))  // Output port 'cout' (carry-out) of width 1 bit
  })

  // Calculate sum bit (sum of a, b, and cin)
  val s1 = io.a ^ io.b     // XOR operation between 'a' and 'b'
  io.sum := s1 ^ io.cin    // XOR operation between 's1' and 'cin', result assigned to 'sum'

  // Calculate carry-out bit
  val s3 = io.a & io.b     // AND operation between 'a' and 'b', result assigned to 's3'
  val s2 = s1 & io.cin     // AND operation between 's1' and 'cin', result assigned to 's2'
  io.cout := s2 | s3       // OR operation between 's2' and 's3', result assigned to 'cout'
}
3.3 - Creating DUT
Using Guoke Cache as an example, this document introduces how to create a DUT based on Chisel.
In this document, a DUT (Design Under Test) refers to the circuit or system being verified during the chip verification process. The DUT is the primary subject of verification. When creating a DUT based on the picker tool, it is essential to consider the functionality, performance requirements, and verification goals of the subject under test. These goals may include the need for faster execution speed or more detailed test information. Generally, the DUT, written in RTL, is combined with its surrounding environment to form the verification environment (test_env), where test cases are written. In this project, the DUT is the Python module to be tested, converted from RTL. Traditional RTL languages include Verilog, SystemVerilog, VHDL, etc. However, as an emerging RTL design language, Chisel (https://www.chisel-lang.org/) is playing an increasingly important role in RTL design due to its object-oriented features and ease of use. This chapter introduces how to create a DUT, using the conversion of the cache source code from the Guoke Processor-NutShell to a Python module as an example.
Chisel and Guoke
Chisel is a high-level hardware construction language (HCL) based on the Scala language. Traditional HDLs describe circuits, while HCLs generate circuits, making them more abstract and advanced. The Stage package provided in Chisel can convert HCL designs into traditional HDL languages such as Verilog and System Verilog. With tools like Mill and Sbt, automation in development can be achieved.
Guoke is a sequential single-issue processor implementation based on the RISC-V RV64 open instruction set, modularly designed using the Chisel language. For a more detailed introduction to Guoke, please refer to the link: https://oscpu.github.io/NutShell-doc/.
Guoke cache
The Guoke Cache (Nutshell Cache) is the cache module used in the Guoke processor. It features a three-stage pipeline design. When the third stage pipeline detects that the current request is MMIO or a refill occurs, it will block the pipeline. The Guoke Cache also uses a customizable modular design that can generate different-sized L1 Caches or L2 Caches by changing parameters. Additionally, the Guoke Cache has a coherence interface to handle coherence-related requests.
Chisel to Verilog
The stage library in Chisel helps generate traditional HDL code such as Verilog and System Verilog from Chisel code. Below is a brief introduction on how to convert a cache implementation based on Chisel into the corresponding Verilog circuit description.
Initializing the Guoke Environment
First, download the entire Guoke source code from the source repository and initialize it:
mkdir cache-ut
cd cache-ut
git clone https://github.com/OSCPU/NutShell.git
cd NutShell && git checkout 97a025d
make init
Creating Scala Compilation Configuration
Then, create build.sc in the cache-ut directory with the following content:
After creating the configuration information, create the src/main/scala source code directory according to the Scala specification. Then, in the source code directory, create nut_cache.scala and use the following code to instantiate the Cache and convert it into Verilog code:
After successfully executing the above command, a Verilog file Cache.v will be generated in the build directory. Then, the picker tool can be used to convert Cache.v into a Python module. Besides Chisel, almost all other HCL languages can generate corresponding RTL codes, so the basic process above also applies to other HCLs.
DUT Compilation
Generally, if you need the DUT to generate waveforms, coverage, etc., it will slow down the DUT’s execution speed. Therefore, when generating a Python module through the picker tool, it will be generated according to various configurations: (1) Turn off all debug information; (2) Enable waveforms; (3) Enable code line coverage. The first configuration aims to quickly build the environment for regression testing, etc.; the second is used to analyze specific errors, timing, etc.; the third is used to improve coverage.
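For instance, the coverage configuration corresponds to the -c option described in the Coverage Statistics section; the sketch below additionally assumes a -w option for naming the waveform file (check picker export --help for the exact flag names):

picker export Cache.v --sname Cache                # fastest: no waveform, no coverage
picker export Cache.v --sname Cache -w Cache.fst   # with waveform output
picker export Cache.v --sname Cache -c             # with line coverage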
3.4 - DUT Verification
Overview of the general verification process
This section introduces the general process of verifying a DUT based on Picker.
The goal of the open verification platform is functional verification, which generally involves the following steps:
1. Determine the verification object and goals
Typically, the design documentation of the DUT is also delivered to the verification engineer. At this point, you need to read the documentation or source code to understand the basic functions, main structure, and expected functionalities of the verification object.
2. Build the basic verification environment
After fully understanding the design, you need to build the basic verification environment. For example, in addition to the DUT generated by Picker, you may also need to set up a reference model for comparison and a signal monitoring platform for evaluating subsequent functional points.
3. Decompose functional points and test points
Based on the design documentation and your understanding of the DUT, list the functions to be verified (functional points) and break each one down into concrete, observable test points.
4. Construct test cases
With the test points, you need to construct test cases to cover the corresponding test points. A test case may cover multiple test points.
5. Collect test results
After running all the test cases, you need to summarize all the test results. Generally, this includes line coverage and functional coverage. The former can be obtained through the coverage function provided by the Picker tool, while the latter requires you to judge whether a function is covered by the test cases through monitoring the behavior of the DUT.
6. Evaluate the test results
Finally, you need to evaluate the obtained results, such as whether there are design errors, whether a function cannot be triggered, whether the design documentation description is consistent with the DUT behavior, and whether the design documentation is clearly described.
Next, we will introduce the general verification process using MMIO read and write of the Nutshell Cache as an example:
1. Determine the verification object and goals:
The MMIO read and write function of the Nutshell Cache. MMIO is a special type of IO mapping that supports accessing IO device registers by accessing memory addresses. Since the register state of IO devices can change at any time, it is not suitable to cache it. When receiving an MMIO request, the Nutshell cache will directly access the MMIO memory area to read or write data instead of querying hit/miss in the ordinary cache line.
2. Build the basic verification environment:
We can roughly divide the verification environment into five parts:
1. Testcase Driver: Responsible for generating the corresponding signals driven by the test cases
2. Monitor: Monitors signals to determine whether functions are covered and correct
3. Ref Cache: A simple reference model
4. Memory/MMIO Ram: Simulates peripheral devices so that the corresponding cache requests can be serviced
5. Nutshell Cache Dut: The DUT generated by Picker
In addition, you may need to further encapsulate the DUT interface to achieve more convenient read and write request operations. For details, refer to the NutShell CacheWrapper.
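A minimal sketch of such an encapsulation is shown below; the method names mirror those used by the test case in step 4, and everything else (how requests are driven, how responses are collected) is an assumption:

class CacheWrapper:
    def __init__(self, dut):
        self.dut = dut

    def trigger_read_req(self, addr):
        # Drive the request channel with a read for addr,
        # stepping the clock until the request is accepted
        ...

    def recv(self):
        # Step the clock until a response is valid, then return its data
        ...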
3. Decompose functional points and test points:
The Nutshell cache can respond to MMIO requests, which can be further decomposed into the following test points:
Test Point 1: MMIO requests will be forwarded to the MMIO port
Test Point 2: The cache will not issue burst transfer requests when responding to MMIO requests
Test Point 3: The cache will block the pipeline when responding to MMIO requests
4. Construct test cases:
The construction of test cases is simple. Knowing that the MMIO address range of the Nutshell cache, obtained through Creating DUT, is 0x30000000~0x7fffffff, we only need to access this memory range to obtain the expected MMIO results. Note that to trigger the test point of blocking the pipeline, you may need to initiate requests continuously.
Here is a simple test case:
# import CacheWrapper here

def mmio_test(cache: CacheWrapper):
    mmio_lb = 0x30000000
    mmio_rb = 0x30001000

    print("\n[MMIO Test]: Start MMIO Serial Test")
    for addr in range(mmio_lb, mmio_rb, 16):
        addr &= ~(0xf)
        addr1 = addr
        addr2 = addr + 4
        addr3 = addr + 8

        cache.trigger_read_req(addr1)
        cache.trigger_read_req(addr2)
        cache.trigger_read_req(addr3)

        cache.recv()
        cache.recv()
        cache.recv()
    print("[MMIO Test]: Finish MMIO Serial Test")
5. Collect test results:
'''
In tb_cache.py
'''
# import packages here

class TestCache():
    def setup_class(self):
        color.print_blue("\nCache Test Start")

        self.dut = DUTCache("libDPICache.so")
        self.dut.init_clock("clock")
        # Init here
        # ...

        self.testlist = ["mmio_serial"]

    def teardown_class(self):
        self.dut.Finish()
        color.print_blue("\nCache Test End")

    def __reset(self):
        # Reset cache and devices
        ...

    # MMIO Test
    def test_mmio(self):
        if "mmio_serial" in self.testlist:
            # Run test
            from ..test.test_mmio import mmio_test
            mmio_test(self.cache, self.ref_cache)
        else:
            print("\nmmio test is not included")

    def run(self):
        self.setup_class()
        # test
        self.test_mmio()
        self.teardown_class()

if __name__ == "__main__":
    tb = TestCache()
    tb.run()
Run:
python3 tb_cache.py
The above is only a rough outline of the execution process; for details, refer to Nutshell Cache Verify.
6. Evaluate the running results:
After the run is complete, the following data can be obtained:
Line coverage:
Functional coverage:
It can be seen that the preset MMIO functions are all covered and correctly triggered.
3.5 - Verification Report
An overview of the structure and content of the verification report.
After we complete the DUT verification, writing a verification report is a crucial step. This section will provide an overview of the structure of the verification report and the content that needs to be covered.
The verification report is a review of the entire verification process and an important supporting document for determining the reasonableness of the verification. Generally, the verification report should include the following content:
Basic document information (author, log, version, etc.)
Verification object (verification target)
Introduction to functional points
Verification plan
Breakdown of test points
Test cases
Test environment
Result analysis
Defect analysis
Verification conclusion
The following content provides further explanation of each item in the list, with specific examples available in nutshell_cache_report_demo.pdf.
1. Basic Information
Including author, log, version, date, etc.
2. Verification object (verification target)
A necessary introduction to your verification object, which may include its structure, basic functions, interface information, etc.
3. Introduction to functional points
By reading the design documents or source code, you need to summarize the target functions of the DUT and break them down into various functional points.
4. Verification plan
Including your planned verification process and verification framework. Additionally, you should explain how each part of your framework works together.
5. Breakdown of test points
Proposed testing methods for the functional points. Specifically, it can include what signal output should be observed under certain signal inputs.
6. Test cases
The specific implementation of the test points. A test case can include multiple test points.
7. Test environment
Including hardware information, software version information, etc.
8. Result analysis
Result analysis generally refers to coverage analysis. Typically, two types of coverage should be considered:
1. Line Coverage: how many lines of RTL code are executed by the test cases. Generally, we require line coverage to be above 98%.
2. Functional Coverage: whether the extracted functional points are covered and correctly triggered, as judged from the relevant signals. We generally require the test cases to cover every functional point.
9. Defect analysis
Analyze the defects found in the DUT. This can cover the completeness and level of detail of the design documents, the correctness of the DUT's functions (whether there are bugs), and whether the DUT's functions can be triggered.
10. Verification conclusion
The final conclusion drawn after completing the chip verification process, summarizing the above content.
4 - Verification Framework
mlvp is a Python-based hardware verification framework that helps users establish hardware verification environments conveniently and systematically.
mlvp is a hardware verification framework written in Python. It relies on a multi-language conversion tool called picker, which converts Verilog hardware design code into a Python package, allowing users to drive and verify hardware designs using Python.
It incorporates some concepts from the UVM verification methodology to ensure the standardization and reusability of the verification environment. The entire setup of the verification environment has been redesigned to better align with software development practices, making it easier for software developers to get started with hardware verification.
4.1 - Quick Start
Installation
toffee
Toffee is a Python-based hardware verification framework designed to help users build hardware verification environments more conveniently and systematically using Python. It leverages the multi-language conversion tool picker, which converts Verilog code of hardware designs into Python Packages, enabling users to drive and verify hardware designs in Python.
Toffee requires the following dependencies:
Python 3.6.8+
Picker 0.9.0+
Once these dependencies are installed, you can install Toffee via pip:
pip install pytoffee
Or install the latest version of Toffee with the following command:
git clone https://github.com/XS-MLVP/toffee.git
cd toffee
pip install .
toffee-test
toffee-test is a pytest plugin that provides testing support for the toffee framework. It identifies test functions as toffee test case objects so that they can be recognized and executed by the toffee framework, manages resources for test cases, and generates test reports, all to assist users in writing test cases for toffee.
Take the adder in the example/adder directory as an example. First, use picker to convert its RTL code into a Python package, and then use toffee to set up the verification environment. After installing the dependencies, run the following command in the example/adder directory to complete the conversion:
make dut
To verify the adder's functionality, we will use toffee to set up a verification environment.
First, we create a driver method for the adder interface using Bundle to describe the interface and Agent to define the driving methods, as shown below:
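The code below is a minimal sketch of this step, reconstructed from the description that follows; the import style and decorator signatures are assumptions and should be checked against the toffee documentation.

from toffee import Bundle, Signals, Agent, driver_method

class AdderBundle(Bundle):
    a, b, cin, sum, cout = Signals(5)

class AdderAgent(Agent):
    def __init__(self, bundle):
        super().__init__(bundle.step)  # clock synchronization function
        self.bundle = bundle

    @driver_method()
    async def exec_add(self, a, b, cin):
        # assign the inputs, advance one clock cycle, return the outputs
        self.bundle.a.value = a
        self.bundle.b.value = b
        self.bundle.cin.value = cin
        await self.bundle.step()
        return self.bundle.sum.value, self.bundle.cout.value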
We use the driver_method decorator to mark the exec_add method, which drives the adder. Each time the method is called, it assigns the input signals a, b, and cin to the adder's input ports, then reads the output signals sum and cout after the next clock cycle and returns them.
The Bundle describes the interface the Agent needs to drive. It provides connection methods to the DUT's input and output ports, allowing the Agent to drive any DUT with the same interface.
Next, we create a reference model to verify the correctness of the adder's output. In toffee, we use the Model class for this, as shown below:
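A sketch of such a reference model. The 64-bit width and the Agent name add_agent are assumptions, not fixed by the text:

from toffee import Model, driver_hook

class AdderModel(Model):
    @driver_hook(agent_name="add_agent")
    def exec_add(self, a, b, cin):
        # compute the expected sum and carry-out, assuming a 64-bit adder
        full = a + b + cin
        return full & ((1 << 64) - 1), full >> 64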
In the reference model, we define the exec_add method, which shares the same input parameters as the exec_add method in the Agent. The method calculates the expected output for the adder. We use the driver_hook decorator to associate this method with the Agent’s exec_add method.
Next, we create a top-level test environment to link the driving methods and the reference model, as shown below:
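A sketch of the top-level environment, following the Env and attach conventions described later in this document:

from toffee import Env

class AdderEnv(Env):
    def __init__(self, adder_bundle):
        super().__init__()
        self.add_agent = AdderAgent(adder_bundle)
        self.attach(AdderModel())  # bind the reference model to the Env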
At this point, the verification environment is set up. toffee will automatically drive the reference model, collect results, and compare them with the adder’s output.
After that, we can write several test cases to verify the adder's functionality with toffee-test, as shown below:
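A hypothetical test case; the testcase decorator and the fixture mechanism shown here are assumptions, so consult the toffee-test documentation for the exact usage.

import random
import toffee_test

@toffee_test.testcase
async def test_random_add(adder_env):
    # drive the adder with random operands; the reference model is
    # synchronized and compared automatically by the framework
    for _ in range(100):
        a = random.randint(0, (1 << 64) - 1)
        b = random.randint(0, (1 << 64) - 1)
        await adder_env.add_agent.exec_add(a, b, 0)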
You can run the example in the example/adder directory with the following command:
make run
After running, the report will be automatically generated in the reports directory.
4.2 - Writing a Standardized Verification Environment
Overview
The main task of writing verification code can be broadly divided into two parts: building the verification environment and writing test cases.
Building the verification environment aims to encapsulate the Design Under Test (DUT) so that the verification engineer does not have to deal with complex interface signals when driving the DUT, but can instead directly use the high-level interfaces provided by the verification environment. If a reference model needs to be written, it should also be part of the verification environment.
Writing test cases involves using the interfaces provided by the verification environment to write individual test cases for functional verification of the DUT.
Building the verification environment can be quite challenging, especially when the DUT is highly complex with numerous interface signals. In such cases, without a unified standard, constructing the verification environment can become chaotic, making it difficult for one person’s verification environment to be maintained by others. Additionally, when new verification tasks overlap with existing ones, it can be difficult to reuse the previous verification environment due to the lack of standardization.
This section will introduce the characteristics that a standardized verification environment should have, which will help in understanding the process of building the verification environment in mlvp.
Non-Reusable Verification Code
Take a simple adder as an example, which has two input ports, io_a and io_b, and one output port, io_sum. If we do not consider the possibility of reusing the verification code for other tasks, we might write the following driving code:
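For example (a sketch assuming a picker-generated DUT class named DUTAdder, with the Step and .value conventions of picker):

def exec_add(dut, a, b):
    # directly manipulates the DUT's pins, so it is tied to this DUT
    dut.io_a.value = a
    dut.io_b.value = b
    dut.Step(1)  # advance the simulator by one cycle
    return dut.io_sum.value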
In the above code, we wrote an exec_add function, which essentially encapsulates the addition operation of the adder at a high level. With the exec_add function, we no longer need to worry about how to assign values to the interface signals of the adder or how to drive the adder and retrieve its output. We simply need to call the exec_add function to drive the adder and complete an addition operation.
However, this driving function has a major drawback—it directly uses the DUT’s interface signals to interact with the DUT, meaning that this driving function can only be used for this specific adder.
Unlike software testing, in hardware verification, we frequently encounter scenarios where the interface structures are identical. Suppose we have another adder with the same functionality, but its interface signals are named io_a_0, io_b_0, and io_sum_0. In this case, the original driving function would fail and could not be reused. To drive this new adder, we would have to rewrite a new driving function.
If writing a driving function for an adder is already this problematic, imagine the difficulty when dealing with a DUT with complex interfaces. After putting in a lot of effort to write the driving code for such a DUT, we might later realize that the code needs to be migrated to a similar structure with some changes in the interface, leading to a significant amount of rework. Issues such as interface name changes, missing or additional signals, or unused references in the driving code would emerge.
The root cause of these issues lies in directly operating the DUT’s interface signals in the verification code. As illustrated in the diagram below, this approach is problematic:
To solve the above problems, we need to decouple the verification code from the DUT, so that the verification code no longer directly manipulates the DUT's interface signals. Instead, it interacts with the DUT through an intermediate layer. This intermediate layer is a user-defined interface structure, referred to as a Bundle in mlvp, and we will use Bundle to represent this intermediate layer throughout the document.
Using the adder as an example, we can define a Bundle structure that includes the signals a, b, and sum, and let the test code interact directly with this Bundle:
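A conceptual sketch; the precise Bundle API is introduced in later sections:

class AdderBundle(Bundle):
    a, b, sum = Signals(3)

def exec_add(adder_bundle, a, b):
    # the driving code only touches Bundle signals,
    # never the DUT's own pin names
    adder_bundle.a.value = a
    adder_bundle.b.value = b
    return adder_bundle.sum.value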
In this case, the exec_add function does not directly manipulate the DUT's interface signals, and it does not even need to know the names of the DUT's interface signals. It interacts directly with the signals defined in the Bundle.
How do we connect the signals in the Bundle to the DUT's pins? This can be done by simply specifying how each signal in the Bundle is connected to the DUT's pins. For example:
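For instance, for the second adder with the io_*_0 pin names (the exact bind signature is an assumption here; the binding methods are detailed in the Bundle chapter):

# map each Bundle signal to the DUT pin it should connect to
adder_bundle.bind(dut, {
    'a': 'io_a_0',
    'b': 'io_b_0',
    'sum': 'io_sum_0',
})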
In this way, regardless of how the DUT’s interface changes, as long as the structure remains the same, we can use the original driving code to operate the DUT, with only the connection process needing adjustment. The relationship between the verification code and the DUT now looks like this:
In mlvp, we provide a simple way to define Bundles and a variety of connection methods to make defining and connecting the intermediate layer easy. Additionally, Bundles offer many practical features to help verification engineers interact with interface signals more effectively.
Categorizing DUT Interfaces for Driving
We now know that a Bundle must be defined to decouple the test code from the DUT. However, if the DUT's interface signals are too complex, we might face a new issue: only this particular DUT can be connected to the Bundle. This is because we would be defining a Bundle structure that includes all the DUT's pins, meaning only a DUT with an identical interface could be connected to this Bundle, which is too restrictive.
In such cases, the intermediate layer loses its purpose. However, we often observe that a DUT's interface structure is logically organized and usually composed of several independent sub-interfaces. For example, the dual-port stack mentioned here has two sub-interfaces with identical structures. Instead of covering the entire dual-port stack interface in a single Bundle, we can split it into two Bundles, each corresponding to one sub-interface.
Moreover, for the dual-port stack, the two sub-interfaces have identical structures, so we can use the same Bundle to describe both sub-interfaces without redefining it. Since both share the same Bundle, the driving code written for this Bundle is fully reusable! This is the essence of reusability in verification environments.
In summary, for every DUT, we should divide its interface signals into several independent sub-interfaces, each with its own function, then define a Bundle for each sub-interface and write the driving code for each Bundle.
At this point, the relationship between the verification code and the DUT looks like this:
Now, our approach to building the verification environment becomes clear: we write high-level abstractions for each independent sub-interface.
Structure of Independent Interface Drivers
We write high-level abstractions for each Bundle, and these pieces of code are independent and highly reusable. If we separate the interaction logic between the high-level operations and place it in the test cases, then a combination of multiple Test Code + Bundle units will form the entire driving environment for the DUT.
We can assign a name to each Test Code + Bundle combination. In mlvp, this structure is called an Agent. An Agent is independent of the DUT and handles all interactions with a specific interface.
The relationship between the verification code and the DUT now looks like this:
Thus, the process of building the driving environment is essentially the process of writing one Agent after another. However, we have not yet discussed how to write a standardized Agent. If everyone writes Agents differently, the verification environment will still become difficult to manage.
Writing a Standardized “Agent”
To understand how to write a standardized Agent, we first need to grasp the main functions an Agent is supposed to accomplish. As mentioned earlier, an Agent implements all the interactions with a specific class of interfaces and provides high-level abstraction.
Let’s explore the interactions between the verification code and the interface. Assuming that the verification code has the ability to read input ports, we can categorize the interactions based on whether the verification code actively initiates communication or passively receives data, as follows:
Verification Code Actively Initiates
Actively reads the value of input/output ports
Actively assigns values to input ports
Verification Code Passively Receives
Passively receives the values of output/input ports
These two types of operations cover all interactions between the verification code and the interface, so an Agent must support both.
Interactions Actively Initiated by the Verification Code
Let’s first consider the two types of interactions actively initiated by the verification code. To encapsulate these interactions at a high level, the Agent must have two capabilities:
The driver should be able to convert high-level semantic information into assignments to interface signals.
It should convert interface signals into high-level semantic information and return this to the initiator.
There are various ways to implement these interactions. However, since mlvp is a verification framework based on a software testing language, and we want to keep the verification code as simple as possible, mlvp standardizes the use of functions to carry out these interactions.
Because functions are the most basic abstraction unit in programming, their input parameters can directly represent high-level semantic information and be passed to the function body. Within the function body, assignments and reading operations can handle the translation between semantic information and interface signals. Finally, the return value can be used to pass the converted interface signal back to the initiator as high-level semantic information.
In mlvp, such functions that facilitate interactions actively initiated by the verification code are called driver methods . In mlvp, we use the driver_method decorator to mark these functions.
Interactions Passively Received by the Verification Code
Next, let’s look at interactions passively received by the verification code. These interactions occur when the interface sends output signals to the verification code upon meeting specific conditions, without the verification code actively initiating the process.
For example, the verification code might want to passively obtain output signals from the DUT after the DUT completes an operation and convert them into high-level semantic information. Alternatively, the verification code might want to passively retrieve output signals at every cycle and convert them.
Similar to the driver_method, mlvp also standardizes the use of functions to carry out this type of interaction. However, these functions have no input parameters and are not actively controlled by the verification code. When specific conditions are met, the function is triggered to read interface signals and convert them into high-level semantic information. This information is then stored for later use by the verification code.
These functions in mlvp, which facilitate passively received interactions, are referred to as monitor methods. We use the monitor_method decorator in mlvp to mark such functions.
A Standardized “Agent” Structure
In summary, we use functions as carriers to facilitate all interactions between the verification code and the interface. These functions are categorized into two types: driver methods and monitor methods. They handle the interactions actively initiated and passively received by the verification code, respectively.
Thus, writing an Agent essentially involves creating a series of driver methods and monitor methods. Once an Agent is created, simply providing the list of its internal driver and monitor methods describes the entire functionality of the Agent.
An Agent structure can be described using the following diagram:
At this point, we have completed the encapsulation of high-level operations on the DUT and established interaction between the verification code and the DUT through functions. Now, to verify the functional correctness of the DUT, we write test cases that use the driver methods to drive the DUT through specific operations. Simultaneously, the monitor methods are automatically triggered to collect relevant information from the DUT.
But how do we verify that the DUT's functionality is correct?
After driving the DUT in the test case, the output information we obtain from the DUT comes in two forms: one is actively retrieved through the driver methods, and the other is collected through the monitor methods. Therefore, verifying the DUT’s functionality essentially involves checking whether this information matches the expected results.
How do we determine whether this information is as expected?
In one case, we already know what the DUT’s output should be or what conditions it should meet. In this situation, after obtaining the information in the test case, we can directly check it against our expectations.
In another case, we do not know the expected output of the DUT. In this scenario, we can create a Reference Model (RM) with the same functionality as the DUT. Whenever we send input to the DUT, we simultaneously send the same input to the reference model.
To verify the two types of output information, we can compare the DUT’s output with the reference model’s output, obtained at the same time, to ensure consistency.
These are the two methods of verifying the DUT's correctness: direct comparison and reference model comparison.
How to Add a Reference Model
For direct comparison, the comparison logic can be written directly into the test case. However, if we use the reference model method, the test case might involve additional steps: sending information to both the DUT and the model simultaneously, collecting both DUT and model outputs, and writing logic for comparing passive signals from the DUT with the reference model. This can clutter the test case code and mix the reference model interaction logic with the test logic, making maintenance difficult.
We can observe that every call to a driver function represents an operation on the DUT, which also needs to be forwarded to the reference model. The reference model doesn’t need to know how the DUT’s interface is driven; it only needs to process high-level semantic information and update its internal state. Therefore, the reference model only needs to receive the high-level semantic information (i.e., the input parameters of the driver function).
Thus, the reference model only needs to define how to react when driver functions are called. The task of passing call information to the reference model can be handled by the framework. Similarly, comparing return values or monitor signals can also be automatically managed by the framework.
With this, test cases only need to focus on driving the DUT, while synchronization and comparison with the reference model will be automatically managed by the framework.
To achieve reference model synchronization, mlvp defines a set of reference model matching specifications. By following these specifications, you can automatically forward and compare data to the reference model. Additionally, mlvp provides the Env concept to package the entire verification environment. Once the reference model is implemented, it can be linked to the Env for automatic synchronization.
Conclusion
Thus, our verification environment becomes the following structure:
At this stage, building the verification environment becomes clear and standardized. For reuse, you simply select the appropriate Agents, connect them to the DUT, and package everything into an Env. To implement a reference model, you just follow the Env interface specification and implement the reference model logic.
The test cases are separated from the verification environment. Once the environment is set up, the interfaces provided by each Agent can be used to write the driving logic for the test cases. The synchronization and comparison with the reference model are automatically handled by the framework.
This is the idea behind constructing the verification environment in mlvp, which offers many features to help you build a standardized verification environment. It also provides test case management methods to make writing and managing test cases easier.
4.3 - Setting Up a Verification Environment
mlvp provides the methods and tools needed for the complete process of setting up a verification environment. This chapter will explain in detail how to use mlvp to build a complete verification environment.
Before proceeding, please ensure you have read Writing a Standardized Verification Environment and are familiar with the basic structure of mlvp's standardized verification environment.
For a completely new verification task, following the environment setup steps, the process of building a verification environment can be divided into the following steps:
Partition the DUT interface based on logical functions and define Bundles.
Write an Agent for each Bundle, completing the high-level encapsulation of the Bundle.
Encapsulate multiple Agents into an Env, completing the high-level encapsulation of the entire DUT.
Write the reference model according to the interface specifications of the Env and bind it to the Env.
This chapter will introduce how to use mlvp tools to meet the requirements for setting up the environment in each step.
4.3.1 - How to Use an Asynchronous Environment
Starting the Event Loop
In the previously described verification environment, we designed a standardized setup. However, if we attempt to write it as a simple single-threaded program, we may encounter complex implementation issues.
For instance, consider having two driver methods that drive two different interfaces. Inside each driver method, we need to wait for several clock cycles of the DUT, and both methods must run simultaneously. In a basic single-threaded program, running both driver methods concurrently can be quite challenging. Even if we force concurrency using multithreading, there is still no built-in mechanism to wait for the DUT to advance through multiple clock cycles. This limitation exists because the interfaces provided by Picker allow us to push the DUT forward one cycle at a time but not to wait for it.
Moreover, in cases where multiple components of the environment need to run concurrently, we require an environment that supports asynchronous execution. mlvp uses Python's coroutines to manage asynchronous programs. It builds an event loop on top of a single thread to manage multiple concurrently running coroutines. These coroutines can wait on each other and switch between tasks via the event loop.
Before starting the event loop, we need to understand two keywords, async and await, to grasp how Python manages coroutines.
By adding the async keyword before a function, we define it as a coroutine, for example:
async def my_coro():
    ...
Inside the coroutine, we use the await keyword to run another coroutine and wait for it to complete, for example:
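async def my_coro2():
    await my_coro()  # run my_coro and wait for it to finish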
How do we start the event loop and run my_coro2? In mlvp, we use mlvp.run to start the event loop and run the asynchronous program:
import mlvp

mlvp.run(my_coro2())
Since all environment components in mlvp need to run within the event loop, when starting the mlvp verification environment, you must first initiate the event loop via mlvp.run and then create the mlvp verification environment inside the loop.
Thus, the test environment should be set up as follows:
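A sketch of this pattern (the DUT and environment names are placeholders):

import mlvp

async def start_test():
    # create the verification environment inside the event loop
    dut = DUTAdder()  # hypothetical picker-generated DUT
    # ... build bundles, agents and the Env here, then drive the DUT ...

mlvp.run(start_test())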
As mentioned earlier, if we need two driver methods to run simultaneously and each one to wait for several DUT clock cycles, asynchronous environments allow us to wait for specific events. However, Picker only provides the ability to push the DUT forward by one cycle and does not provide an event to wait on.
mlvp addresses this by creating a background clock that automatically pushes the DUT forward one cycle at a time. After each cycle, the background clock sends a clock signal to the other coroutines, allowing them to resume execution. The actual clock cycles of the DUT are driven by the background clock, while the other coroutines only need to wait for the clock signal.
In mlvp, the background clock is created using start_clock:
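from mlvp import start_clock  # import path assumed; check the mlvp API docs

async def start_test():
    dut = DUTAdder()
    start_clock(dut)  # create the background clock that advances the DUT
    # ... the rest of the test ...

mlvp.run(start_test())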
Simply call start_clock within the event loop to create the background clock. It requires a DUT object so that it can drive the DUT's execution and bind the clock signal to the DUT and its pins.
In other coroutines, you can use ClockCycles to wait for the clock signal. The ClockCycles parameter can be the DUT itself or any of its pins. For example:
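from mlvp.triggers import ClockCycles  # import path assumed

async def my_coro(dut):
    await ClockCycles(dut, 10)  # wait for 10 DUT clock cycles
    print("10 cycles passed")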
In my_coro, ClockCycles is used to wait for 10 clock cycles of the DUT. After 10 cycles, my_coro continues executing and prints "10 cycles passed".
mlvp provides several methods for waiting on clock signals, such as:
ClockCycles: Wait for a specified number of DUT clock cycles.
Value: Wait for a DUT pin to equal a specific value.
AllValid: Wait for all DUT pins to be valid simultaneously.
Condition: Wait for a condition to be met.
Change: Wait for a change in the value of a DUT pin.
RisingEdge: Wait for the rising edge of a DUT pin.
FallingEdge: Wait for the falling edge of a DUT pin.
For more methods of waiting on clock signals, refer to the API documentation.
4.3.2 - How to Use Bundle
Bundle serves as an intermediary layer in the mlvp verification environment, facilitating interaction between the Agent and the DUT while ensuring their decoupling. Additionally, Bundle helps define the hierarchy of DUT interface layers, making access to the DUT interface clearer and more convenient.
A Simple Definition of a Bundle
To define a Bundle, you need to create a new class that inherits from the Bundle class in mlvp. Here’s a simple example of defining a Bundle:
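A minimal sketch, using the Signals helper that also appears in the timing example later in this section:

from mlvp import Bundle, Signals

class AdderBundle(Bundle):
    a, b, sum, cin, cout = Signals(5)  # define the five interface signals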
This Bundle defines a simple adder interface. In the AdderBundle class, we define five signals: a, b, sum, cin, and cout, which represent the input ports a and b, the output port sum, and the carry input and output ports cin and cout, respectively.
After the definition, we can access these signals through an instance of the AdderBundle class, for example:
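adder_bundle = AdderBundle()

adder_bundle.a.value = 1        # drive input a
adder_bundle.b.value = 2        # drive input b
print(adder_bundle.sum.value)   # read output sum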
In the code above, we created an instance of a bundle and drove it, but we did not bind this bundle to any DUT, which means operations on this bundle cannot actually affect the DUT.
Using the bind method, we can bind a DUT to a bundle. For example, if we have a simple adder DUT whose interface names match those defined in the Bundle:
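adder = DUTAdder()        # picker-generated DUT with pins a, b, sum, cin, cout
adder_bundle.bind(adder)  # bind by matching signal names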
The bind function will automatically retrieve all interfaces from the DUT and bind those with the same names. Once bound, operations on the bundle will directly affect the DUT.However, if the interface names of the DUT differ from those defined in the Bundle, using bind directly will not bind them correctly. In the Bundle, we provide various binding methods to accommodate different binding needs.
Binding via a Dictionary
In the bind function, you can specify a mapping between the DUT’s interface names and the Bundle’s interface names by passing in a dictionary.
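For example (the exact bind signature and the direction of the mapping are assumed from the description above):

# map Bundle signal names to the DUT's pin names
adder_bundle.bind(adder, {
    'a': 'io_a',
    'b': 'io_b',
    'sum': 'io_sum',
    'cin': 'io_cin',
    'cout': 'io_cout',
})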
Suppose the interface names in the Bundle correspond to those in the DUT exactly as in the dictionary above: each DUT interface name carries an io_ prefix compared to the name in the Bundle. In this case, you can create the Bundle using the from_prefix method, providing the prefix name to instruct the Bundle to bind using the prefix.
In some cases, the correspondence between the DUT’s interface names and the Bundle’s interface names may not be a simple prefix or dictionary relationship but instead follow more complex rules. For example, the mapping may be:
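For instance (a plausible reconstruction; only the io_a_in case is fixed by the text below), DUT pins such as io_a_in and io_b_in might need to map to the Bundle signals a and b. Assuming a from_regex constructor, by analogy with from_prefix:

adder_bundle = AdderBundle.from_regex(r'io_(.*)_in')
adder_bundle.bind(adder)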
When using a regular expression, the Bundle attempts to match the DUT’s interface names with the regular expression. For successful matches, the Bundle reads all capture groups from the regular expression, concatenating them into a string. This string is then used to match against the Bundle’s interface names.
For example, in the code above, io_a_in matches the regular expression successfully, capturing a as the unique capture group. The name a matches the Bundle’s interface name a, so io_a_in is correctly bound to a.
Creating Sub-Bundles
Often, we may need a Bundle to contain one or more other Bundles. In this case, we can include already defined Bundles as sub-Bundles of the current Bundle.
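For example (a sketch: MultiplierBundle, the single-signal Signal helper, and the sub-prefixes are assumptions):

class MultiplierBundle(Bundle):
    a, b, product = Signals(3)

class ArithmeticBundle(Bundle):
    selector = Signal()                                 # the Bundle's own signal
    adder = AdderBundle.from_prefix('add_')             # sub-Bundle
    multiplier = MultiplierBundle.from_prefix('mul_')   # sub-Bundle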
In the code above, we define an ArithmeticBundle that contains its own signal, selector. In addition, it includes an AdderBundle and a MultiplierBundle, which are named adder and multiplier, respectively.
When accessing the sub-Bundles within the ArithmeticBundle, you can use the . operator:
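arith_bundle = ArithmeticBundle.from_prefix('io_')

arith_bundle.selector.value = 1
arith_bundle.adder.a.value = 2        # reach into the adder sub-Bundle
arith_bundle.multiplier.a.value = 3   # reach into the multiplier sub-Bundle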
Furthermore, when defining in this manner, binding the top-level Bundle will also bind the sub-Bundles to the DUT. The previously mentioned various binding methods can still be used when defining sub-Bundles.
It is important to note that the method for creating sub-Bundles matches signal names that have been processed by the previous Bundle’s creation method. For example, in the code above, if the top-level Bundle’s matching method is set to from_prefix('io_'), then the signal names matched within the AdderBundle will be those stripped of the io_ prefix.
Similarly, the dictionary matching method will pass the names transformed into the mapped names for matching with the sub-Bundle, while the regular expression matching method will pass the names captured by the regular expression for matching with the sub-Bundle.
Practical Operations in a Bundle
Signal Access and Assignment
Accessing Signal Values
In a Bundle, signals can be accessed not only through the . operator but also through the [] operator.
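adder_bundle.a.value = 1       # access via the . operator
adder_bundle['a'].value = 1    # equivalent access via the [] operator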
When binding, you can pass the unconnected_signal_access parameter to control whether accessing unconnected signals is allowed. By default, it is True, meaning unconnected signals can be accessed; in this case, writing to such a signal will not change it, and reading it will return None. When unconnected_signal_access is set to False, accessing unconnected signals will raise an exception.
Assigning All Signals Simultaneously
You can use the set_all method to change all input signals at once.
adder_bundle.set_all(0)
Changing Signal Assignment Mode
The signal assignment mode is a concept in picker that controls how signals are assigned. Please refer to the picker documentation for more details.
In a Bundle, you can change the assignment mode for the entire Bundle using the set_write_mode method. Additionally, there are shortcut methods: set_write_mode_as_imme, set_write_mode_as_rise, and set_write_mode_as_fall, which set the Bundle's assignment mode to immediate, rising-edge, and falling-edge assignment, respectively.
Message Support
Default Message Type Assignment
mlvp supports assigning a default message type to a Bundle's signals using the assign method with a dictionary.
adder_bundle.assign({'a': 1, 'b': 2, 'cin': 0})
The Bundle will automatically assign the values from the dictionary to the corresponding signals. If you want to assign unspecified signals to a default value, use * to specify a default value:
adder_bundle.assign({'*': 0, 'a': 1})
Default Message Assignment for Sub-Bundles
If you want to assign signals in sub-Bundles using default message types, this can be achieved in two ways. When the multilevel parameter of assign is set to True, the Bundle supports multi-level dictionary assignment, with nested dictionaries following the Bundle hierarchy. When multilevel is False, the Bundle supports flattened dictionaries whose keys use "." to address sub-Bundle signals.
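A sketch of both styles, reusing the ArithmeticBundle from above (the flattened-key form is inferred from the as_dict description below):

# multilevel=True: nested dictionaries follow the Bundle hierarchy
arith_bundle.assign({
    'selector': 1,
    'adder': {'a': 1, 'b': 2, '*': 0},
}, multilevel=True)

# multilevel=False: flattened keys use '.' to reach sub-Bundle signals
arith_bundle.assign({
    'selector': 1,
    'adder.a': 1,
    'adder.b': 2,
}, multilevel=False)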
Reading Default Message Types
You can convert the current signal values of a Bundle into a dictionary using the as_dict method. It supports the same two formats: when multilevel is True, a multi-level dictionary is returned; when multilevel is False, a flattened dictionary is returned.
Custom Message Types
In custom message structures, rules can be defined to assign signals to a Bundle.
One approach is to implement the as_dict function in the custom message structure to convert it into a dictionary, which can then be assigned to the Bundle using the assign method.
Another approach is to implement the __bundle_assign__ function in the custom message structure, which accepts a Bundle instance and assigns values to its signals. Once this is implemented, the assign method can be used to assign the message to the Bundle, and the Bundle will automatically call the __bundle_assign__ function to perform the assignment.
When you need to convert the signal values in a Bundle into a custom message structure, implement a from_bundle class method in the custom message structure. This method accepts a Bundle instance and returns the custom message structure.
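A sketch of both directions, with a hypothetical AddReq message type:

class AddReq:
    def __init__(self, a, b, cin=0):
        self.a, self.b, self.cin = a, b, cin

    def __bundle_assign__(self, bundle):
        # how this message writes itself into a Bundle
        bundle.a.value = self.a
        bundle.b.value = self.b
        bundle.cin.value = self.cin

    @classmethod
    def from_bundle(cls, bundle):
        # how a message is reconstructed from the Bundle's current values
        return cls(bundle.a.value, bundle.b.value, bundle.cin.value)

adder_bundle.assign(AddReq(1, 2))       # uses __bundle_assign__
req = AddReq.from_bundle(adder_bundle)  # reads values back into a message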
In addition to encapsulating DUT pins, the Bundle class also provides timing encapsulation based on arrays, which can be applied to simple timing scenarios. The Bundle class offers a process_requests(data_list) function that accepts an array as input. On the i-th clock cycle, data_list[i] will assign the corresponding data to the pins. The data_list can contain data in the form of a dict or a callable object (callable(cycle, bundle_ins)). For the dict type, special keys include:
__funcs__: func(cycle, self)                  # Callable object; can also be an array of functions [f1, f2, ...]
__condition_func__: func(cycle, self, cargs)  # Conditional function; assignment occurs when it returns true, otherwise the clock advances
__condition_args__: cargs                     # Arguments for the conditional function
__return_bundles__: bundle                    # Specifies which bundle data should be returned when this dict is processed; can be list[bundle]
If the input dict contains __return_bundles__, the function will return the corresponding bundle values, such as {"data": x, "cycle": y}. For example, consider the Adder bundle where the result is expected after the third addition:
# The Adder is combinational logic but used here as sequential logic
class AdderBundle(Bundle):
    a, b, sum, cin, cout = Signals(5)  # Define the pins

    def __init__(self, dut):
        super().__init__()
        # init clock
        # dut.InitClock("clock")
        self.bind(dut)  # Bind to the DUT

    def add_list(self, data_list=[(1, 2), (3, 4), (5, 6), (7, 8)]):
        # Create the input dicts
        data = []
        for i, (a, b) in enumerate(data_list):
            x = {"a": a, "b": b, "*": 0}  # Build the dict for bundle assignment
            if i >= 2:
                x["__return_bundles__"] = self  # Set the bundle to be returned
            data.append(x)
        return self.process_requests(data)  # Drive the clock, assign values, return results
In the Bundle, a step function is provided to conveniently synchronize with the clock signal of the DUT. When the Bundle is connected to any signal of the DUT, the step function automatically synchronizes with the DUT's clock signal.
The step function can be used to wait for clock cycles.
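For example, inside a coroutine:

async def wait_then_read(adder_bundle):
    await adder_bundle.step()   # wait one DUT clock cycle
    return adder_bundle.sum.value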
Signal Connectivity Rules
Once the Bundle instance is defined, you can call the all_signals_rule method to get the connection rules for all signals, helping you check whether the connection rules are as expected.
adder_bundle.all_signals_rule()
Signal Connectivity Check
The detect_connectivity method checks whether a specific signal name can connect to any signal in the Bundle.
adder_bundle.detect_connectivity('io_a')
The detect_specific_connectivity method checks if a specific signal name can connect to a particular signal in the Bundle.
To check connectivity for signals in sub-Bundles, use the . operator to specify the sub-Bundle.
DUT Signal Connectivity Check
Unconnected Signal Check
The detect_unconnected_signals method checks for any signals in the DUT that are not connected to any Bundle.
Bundle.detect_unconnected_signals(adder)
Duplicate Connection Check
The detect_multiple_connections method checks for signals in the DUT that are connected to multiple Bundles.
Bundle.detect_multiple_connections(adder)
Other Practical Operations
Set Bundle Name
You can set the name of a Bundle using the set_name method.
adder_bundle.set_name('adder')
Once the name is set, more intuitive prompt information is provided.
Get All Signals in the Bundle
The all_signals method returns a generator containing all signals, including those in sub-Bundles.
In many cases, the interface of a DUT can be complex, making it tedious to write the Bundle definitions by hand. However, since the Bundle serves as an intermediate layer, an exact definition of the signal names is essential. To address this, mlvp provides a script that automatically generates Bundle definitions from the DUT's interface.
The script bundle_code_gen.py can be found in the scripts folder of the mlvp repository. It generates Bundle definitions by parsing a DUT instance and the specified binding rules.
It provides three functions:
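Judging from the generation rules described below, they are most likely named along these lines (check the script itself for the exact names and signatures):

gen_bundle_code_from_dict(bundle_name, dut, dict)      # generate from a name mapping
gen_bundle_code_from_prefix(bundle_name, dut, prefix)  # generate from a signal prefix
gen_bundle_code_from_regex(bundle_name, dut, regex)    # generate from a regular expression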
These functions generate Bundle definitions based on a dictionary, prefix, or regular expression, respectively.
To use them, specify the Bundle name, the DUT instance, and the corresponding generation rule to generate the Bundle definition. You can also use the max_width parameter to set the maximum line width of the generated code.
The generated code can be copied directly into your project or used with minor modifications. It can also serve as a sub-Bundle definition for use in other Bundles.
4.3.3 - How to Write an Agent
An Agent in the mlvp verification environment provides a high-level encapsulation of the signals within a class of Bundles, allowing the upper-level driver code to drive and monitor the signals in the Bundle without worrying about specific signal assignments.
An Agent consists of driver methods and monitor methods. The driver methods actively drive the signals in the Bundle, and the monitor methods passively observe them.
Initializing the Agent
To define an Agent, you need to create a new class that inherits from the Agent class in mlvp. Here’s a simple example of defining an Agent:
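A sketch reconstructed from the description below; the import path is an assumption:

from mlvp.agent import Agent, driver_method, monitor_method

class AdderAgent(Agent):
    def __init__(self, bundle):
        super().__init__(bundle.step)  # clock synchronization function
        self.bundle = bundle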
In the initialization of the AdderAgent class, you need to pass the Bundle that this Agent will drive and provide a clock synchronization function to the parent Agent class. This function will be used by the Agent to determine when to call the monitor methods. Generally, it can be set to bundle.step, which is the clock synchronization function in the Bundle, synchronized with the DUT’s clock.
Creating Driver Methods
In the Agent, a driver method is an asynchronous function used to actively drive the signals in the Bundle. The driver function needs to parse its input parameters and assign values to the signals in the Bundle based on the parsed results, which can span multiple clock cycles. If you need to obtain signal values from the Bundle, write the corresponding logic in the function and return the needed data through the function's return value.
Each driver method should be an asynchronous function decorated with the @driver_method decorator so that the Agent can recognize it as a driver method.
Here’s a simple example of defining a driver method:
In the exec_add function, we assign the incoming parameters a, b, and cin to the corresponding signals in the Bundle. We then wait for one clock cycle. After the clock cycle ends, we return the values of the sum and cout signals from the Bundle.
During the development of a driver function, you can use all the synchronization methods for waiting on clock signals introduced in How to Use an Asynchronous Environment, such as ClockCycles, Value, etc.
Once created, you can call this driver method in your driving code like a regular function:
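sum_val, cout_val = await adder_agent.exec_add(1, 2, 0)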
Functions marked with @driver_method have various features when called; this will be elaborated on when writing test cases. Additionally, these functions will handle matching against the reference model and automatically call back to return values for comparison; this will be discussed in the reference model section.
Creating Monitor Methods
The monitor method also needs to be an asynchronous function and should be decorated with the @monitor_method decorator so that the Agent can recognize it as a monitor method.
Here’s a simple example of defining a monitor method:
In the monitor_sum function, we use the sum signal in the Bundle as the object to monitor. When the value of the sum signal is greater than 0, we collect the default message type generated by the Bundle. The collected return value is stored in the internal message queue.
Once the monitor_method decorator is added, the monitor_sum function will be called automatically by the Agent, which uses the clock synchronization function provided during the Agent's initialization to decide when to call the monitor method. By default, the Agent calls the monitor method once per clock cycle. If the monitor method has a return value, it is stored in the internal message queue. If a single call to the monitor method spans multiple clock cycles, the Agent waits until the previous call has finished before calling it again.
If you write a monitor method like this:
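@monitor_method()
async def monitor_sum(self):
    return self.bundle.as_dict()  # no condition, so a message is collected every cycle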
This monitor method will add a message to the message queue in every cycle.
Retrieving Monitor Messages
Since this monitor method is marked with @monitor_method, it will be called automatically by the Agent. If you try to call this function directly in your test case, as follows, it will not execute as you might expect:
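result = await adder_agent.monitor_sum()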
Instead, when called in the above manner, the monitor method will pop the earliest collected message from the message queue and return it. If the message queue is empty, this call will wait until there are messages in the queue before returning.
If you want to get the number of messages currently in the message queue, the Agent provides a corresponding query interface; refer to the mlvp API documentation for the exact accessor.
By creating monitor methods, you can easily add a background monitoring task that observes the signal values in the Bundle and collects messages when certain conditions are met. Once a function is marked as a monitor method, the framework will also provide it with matching against the reference model and automatic collection for comparison; this will be detailed in the reference model section.
By writing multiple driver methods and monitor methods within the Agent, you complete the entire Agent implementation.
4.3.4 - How to Build an Env
Env is used in the mlvp verification environment to package the entire verification setup. It directly instantiates all the agents needed in the verification environment and is responsible for passing the required bundles to these agents.
Once the Env is created, the specification for writing reference models is also determined. Reference models written according to this specification can be directly attached to the Env, allowing it to handle automatic synchronization of the reference models.
Creating an Env
To define an Env, you need to create a new class that inherits from the Env class in mlvp. Here’s a simple example of defining an Env:
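from mlvp.env import Env  # import path assumed

class DualPortStackEnv(Env):
    def __init__(self, port1_bundle, port2_bundle):
        super().__init__()
        self.port1_agent = StackAgent(port1_bundle)
        self.port2_agent = StackAgent(port2_bundle)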
In this example, we define a DualPortStackEnv class that instantiates two identical StackAgent objects, each responsible for driving different Bundles.
You can choose to connect the Bundles outside the Env or within the Env itself, as long as you ensure that the correct Bundles are passed to the Agents.
At this point, if you do not need to write additional reference models, the entire verification environment setup is complete, and you can directly write test cases using the interfaces provided by the Env. For example:
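env = DualPortStackEnv(port1_bundle, port2_bundle)

# push is the driver method assumed for StackAgent in this example
await env.port1_agent.push(1)
await env.port2_agent.push(2)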
Reference models written according to this specification can be directly attached to the Env, allowing it to automatically synchronize the reference models. This can be done as follows:
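env = DualPortStackEnv(port1_bundle, port2_bundle)
env.attach(StackRefModel())  # StackRefModel is a hypothetical reference model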
An Env can attach multiple reference models, all of which will be automatically synchronized by the Env.
The specific method for writing reference models will be detailed in the section on writing reference models.
4.3.5 - How to Write a Reference Model
A reference model is used to simulate the behavior of the design under verification, aiding in the validation process. In the mlvp verification environment, the reference model needs to follow the Env interface specifications so it can be attached to Env, allowing automatic synchronization by Env.
Two Ways to Implement a Reference Model
mlvp provides two methods for implementing a reference model, both of which can be attached to the Env for automatic synchronization. Depending on the scenario, you can choose the most suitable method for your reference model implementation.
These two methods are function call mode and independent execution flow mode. Below, we introduce both concepts in detail.
Function Call Mode
Function call mode defines the reference model’s external interface as a series of functions, driving the reference model’s behavior by calling these functions. In this mode, data is passed to the reference model through input parameters, and the model’s output data is retrieved through return values. The internal state of the reference model is updated through the logic within the function body.
Here is a simple example of a reference model implemented in function call mode, a reference model of an adder:
class AdderRefModel():
    def add(self, a, b):
        return a + b
In this reference model, there is no need for any internal state. All functionalities are handled through a single external function interface.
Note that reference models written in function call mode can only be executed through external function calls and cannot output internal data passively. As a result, they cannot be matched with the monitoring methods in an Agent. Writing monitoring methods in the Agent is meaningless when using a reference model written in function call mode.
Independent Execution Flow Mode
Independent execution flow mode defines the reference model’s behavior as an independent execution flow. Instead of being controlled by external function calls, the reference model can actively fetch input data and output data. When external data is sent to the reference model, it does not respond immediately. Instead, it stores the data and waits for its logic to actively retrieve and process the data.
Here is a code snippet that demonstrates this mode using concepts provided by mlvp, though understanding these concepts in detail is not required at the moment.
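A sketch matching the description below; the exact port call syntax is an assumption:

class AdderModel(Model):
    def __init__(self):
        super().__init__()
        self.add_port = DriverPort()   # receives input data from driver calls
        self.sum_port = MonitorPort()  # outputs results for comparison

    async def main(self):
        while True:
            operands = await self.add_port()        # actively fetch input data
            result = operands["a"] + operands["b"]  # compute the expected sum
            await self.sum_port(result)             # actively output the result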
In this example, two types of interfaces are defined in the constructor of the reference model: a driver interface (DriverPort), represented by add_port, which receives external input data, and a monitoring interface (MonitorPort), represented by sum_port, which outputs data to the external environment.
Once these interfaces are defined, the reference model does not trigger a specific function when data is sent to it. Instead, the data is sent to the add_port driver interface. At the same time, external code cannot proactively retrieve output data from the reference model; the model actively outputs the result data via the sum_port monitoring interface.
How does the reference model use these interfaces? The reference model has a main function, which is its execution entry point. When the reference model is created, the main function is automatically called and runs continuously in the background. In the code above, the main function continuously waits for data from the add_port, computes the result, and outputs the result to the sum_port.
The reference model actively requests data from the add_port, and if there is no data, it waits for new data. Once data arrives, it processes the data and proactively outputs the result to the sum_port. This execution flow operates independently and is not controlled by external function calls. When the reference model becomes more complex, with multiple driver and monitoring interfaces, the independent execution flow is particularly useful for handling interactions, especially when the interfaces have a specific call order.
How to Write a Function Call Mode Reference Model
Driver Function Matching
Suppose the following interface is defined in the Env:
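A sketch consistent with the examples below, exposing a single Agent named port_agent with a driver function push:

class StackEnv(Env):
    def __init__(self, stack_bundle):
        super().__init__()
        self.port_agent = StackAgent(stack_bundle)  # provides a driver method named push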
If you want to write a reference model that corresponds to this interface, you need to define the behavior of the reference model for each driver function. For each driver function, write a corresponding function in the reference model that will be automatically called when the driver function is invoked.
To match a function in the reference model with a specific driver function, you should use the @driver_hook decorator to indicate that the function is a match for a driver function. Then, specify the corresponding Agent and driver function name in the decorator. Finally, ensure that the function parameters match those of the driver function, and the two will be linked.
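For example (a sketch; the behavior inside the hook is up to the model):

class StackRefModel(Model):
    @driver_hook(agent_name="port_agent", driver_name="push")
    def push_hook(self, data):
        # update the model's internal state and return the expected result
        ...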
At this point, the driver function is linked with the reference model function. When a driver function in the Env is called, the corresponding reference model function will be automatically invoked, and their return values will be compared.
mlvp also provides several matching methods to improve flexibility:
Specify the Driver Function Path
You can specify the driver function path using a “.”. For example:
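@driver_hook("port_agent.push")
def push_hook(self, data):
    ...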
Match Driver Function Name with Function Name
If the reference model function name is the same as the driver function name, you can omit the driver_name parameter:
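@driver_hook(agent_name="port_agent")
def push(self, data):
    ...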
Instead of writing a separate driver_hook for each driver function in the Agent, you can use the @agent_hook decorator to match all the driver functions in an Agent at once.
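@agent_hook("port_agent")
def port_agent(self, driver_name, args):
    # driver_name: which driver function in the Agent was called
    # args: a dictionary of the arguments passed to that driver function
    ...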
In this example, the port_agent function will match all the driver functions in the port_agent Agent. When any driver function in the Agent is called, the port_agent function is invoked automatically. Besides self, the port_agent function should take exactly two parameters: the first is the name of the driver function, and the second is the arguments passed to the driver function.
When a driver function is called, the driver_name parameter receives the name of the driver function, and the args parameter receives the arguments passed during the call, represented as a dictionary. The port_agent function can then decide how to handle the call based on driver_name and args, and return the result. The framework automatically compares the return value of this function with that of the driver function.
As with driver hooks, the @agent_hook decorator allows you to omit the agent_name parameter when the function name matches the Agent name.
Using Both agent_hook and driver_hook
Once an agent_hook is defined, in theory there is no need to define any driver_hook to match driver functions in the Agent. However, if special handling is needed for a specific driver function, a driver_hook can still be defined to match it.
When both an agent_hook and a driver_hook are present, the framework first calls the agent_hook function, followed by the driver_hook function, and the result of the driver_hook function is used for comparison.
Once all the driver functions in the Env have corresponding driver_hook or agent_hook matches, the reference model can be attached to the Env using the attach method.
How to Write an Independent Execution Flow Reference Model
An independent execution flow reference model handles input and output through port interfaces, from which it can actively request or send data. In mlvp, two types of interfaces are provided for this purpose: DriverPort and MonitorPort.
A series of DriverPort objects can be defined to match the driver functions in the Env, and a series of MonitorPort objects can be defined to match the monitor functions.
When a driver function in the Env is called, the data from the call is sent to the DriverPort. The reference model actively fetches this data, performs its calculations, and outputs the result to the MonitorPort. When a monitor function in the Env is called, the comparator automatically retrieves the data from the MonitorPort and compares it with the return value of the monitor function.
Driver Method Interface Matching
To receive all driver function calls from the Env, the reference model can define a corresponding DriverPort for each driver function. The DriverPort parameters agent_name and driver_name are used to match the driver functions in the Env.
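For example:

self.push_port = DriverPort(agent_name="port_agent", driver_name="push")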
Similar to driver_hook, you can also match the driver functions in the Env in the following ways:
# Specify the driver function path using "."
self.push_port = DriverPort("port_agent.push")

# If the variable name in the reference model matches the driver function name,
# you can omit the driver_name parameter
self.push = DriverPort(agent_name="port_agent")

# Match both the Agent name and driver function name, using `__` to separate them
self.port_agent__push = DriverPort()
Agent Interface Matching
You can also define an AgentPort to match all driver functions in an Agent. Unlike agent_hook, once an AgentPort is defined, no DriverPort can be defined for any driver function in that Agent. All driver function calls will be sent to the AgentPort.
Similarly, when the variable name matches the Agent name, you can omit the agent_name parameter:
self.port_agent = AgentPort()
Monitor Method Interface Matching
To match the monitor functions in the Env, the reference model needs to define a corresponding MonitorPort for each monitor function. The definition method is the same as for DriverPort.
self.monitor_port = MonitorPort(agent_name="port_agent", monitor_name="monitor")

# Specify the monitor function path using "."
self.monitor_port = MonitorPort("port_agent.monitor")

# If the variable name in the reference model matches the monitor function name,
# you can omit the monitor_name parameter
self.monitor = MonitorPort(agent_name="port_agent")

# Match both the Agent name and monitor function name, using `__` to separate them
self.port_agent__monitor = MonitorPort()
The data sent to the MonitorPort will automatically be compared with the return value of the corresponding monitor function in the Env.
Once all DriverPort, AgentPort, and MonitorPort definitions in the reference model successfully match the interfaces in the Env, the reference model can be attached to the Env using the attach method.
4.4 - Writing Test Cases
Writing test cases requires utilizing the interfaces defined in the verification environment. However, it is often necessary to drive multiple interfaces simultaneously in the test case, and there are often different synchronization needs with reference simulations. This section will provide a detailed explanation of how to better use the interfaces in the verification environment for writing test cases.
Once the verification environment is set up, test cases are written to verify whether the design functions as expected. Two important aspects of hardware verification are functional coverage and line coverage. Functional coverage refers to whether the test cases cover all the functions of the design, while line coverage refers to whether the test cases trigger all lines of the design's code. In mlvp, not only is support provided for both types of coverage, but after each run the tool automatically calculates both results and generates a verification report. mlvp uses pytest to manage test cases, which provides powerful test case management capabilities.
In this section, we will cover how to write test cases to take advantage of the powerful features provided by mlvp in the following areas:
How to use test environment interfaces for driving
How to manage test cases with pytest
How to add functional test points
4.4.1 - How to Drive Using Test Environment Interfaces
How to Simultaneously Call Multiple Driver Functions
Once the verification environment is set up, you can write test cases using the interfaces provided by the verification environment. However, it is often difficult to call two driver functions simultaneously using conventional serial code. This becomes especially important when multiple interfaces need to be driven at the same time, and mlvp provides a simple way to handle such scenarios.
Simultaneously Calling Multiple Driver Functions of Different Categories
For example, suppose the current Env contains two agents, port1_agent and port2_agent, each of which provides a push driver function. We want to call the push functions of both agents simultaneously in a test case, driving both interfaces at the same time. In mlvp, this can be achieved using the Executor.
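A minimal sketch of such a test case is shown below (the Executor import path and the env object are assumptions; port1_agent and port2_agent follow the structure described above):

from mlvp import Executor   # import path assumed

async def my_test(env):
    async with Executor() as exec:
        exec(env.port1_agent.push(1))   # added to the execution block
        exec(env.port2_agent.push(2))   # added to the execution block
    # leaving the block runs both push calls simultaneously and waits for them
    results = exec.get_results()
    # e.g. {"port1_agent.push": [...], "port2_agent.push": [...]}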
We use async with to create an Executor object and establish an execution block. Calling exec directly adds a driver function to be executed. When the Executor object exits its scope, all added driver functions are executed simultaneously, and the Executor automatically waits for all of them to complete.

If you need to retrieve the return values of the driver functions, use the get_results method. get_results returns a dictionary whose keys are the names of the driver functions and whose values are lists containing the return values of the respective calls.
Multiple Calls to the Same Driver Function
If the same driver function is called multiple times in the execution block, Executor will automatically serialize these calls.
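For example (continuing the earlier sketch, still inside an async test case):

async with Executor() as exec:
    for i in range(5):
        exec(env.port1_agent.push(i))   # five calls to the same driver function
    exec(env.port2_agent.push(0))       # a single call to a different driver function
results = exec.get_results()
# results["port1_agent.push"] holds the five return values in call order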
In the code above, port1_agent.push will be called 5 times, and port2_agent.push will be called once. Since port1_agent.push is the same driver function, Executor will automatically serialize these 5 calls, and the return values will be stored sequentially in the result list. Meanwhile, port2_agent.push will execute in parallel with the serialized port1_agent.push calls.
In this process, Executor creates a scheduling structure like the following: based on the driver functions' names, it automatically creates two scheduling groups, and each call is added to its group in the order it was made. Within a scheduling group, driver functions execute sequentially; across groups, they execute in parallel.

The default name of a scheduling group is the driver function's path name, separated by periods (.).

Using the sche_group parameter, you can manually specify which scheduling group a driver function call belongs to. For example:
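(A sketch, continuing the same environment:)

async with Executor() as exec:
    exec(env.port1_agent.push(1), sche_group="group1")
    exec(env.port2_agent.push(2), sche_group="group1")
results = exec.get_results()
# results == {"group1": [port1_push_result, port2_push_result]}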
In this case, port1_agent.push and port2_agent.push will be added sequentially to the same scheduling group, group1, and they will execute in series. In the dictionary returned by get_results, group1 will be the key, and its value will be a list of the return values for all the driver functions in group1.
Adding Custom Functions to the Executor
If we call driver functions or other functions from a custom function and wish to schedule the custom function through the Executor, we can add the custom function in the same way as we add driver functions.
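For example (a sketch; multi_push_port1 is the custom function used below):

async def multi_push_port1(env, base):
    # a custom function that calls the same driver function twice
    await env.port1_agent.push(base)
    await env.port1_agent.push(base + 1)

async with Executor() as exec:
    exec(multi_push_port1(env, 0))    # first call joins group "multi_push_port1"
    exec(multi_push_port1(env, 10))   # second call joins the same group, after the first
    exec(env.port2_agent.push(20))    # runs in parallel with the group above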
Here, multi_push_port1 will be added to the Executor, creating a scheduling group named multi_push_port1 with two calls added to it. This group will execute in parallel with the port2_agent.push group.

We can also use Executor within custom functions, or call other custom functions from them, allowing us to build arbitrarily complex scheduling scenarios with Executor.
Example Scenarios:
Scenario 1:
Suppose the environment provides two agents, agent1 and agent2, each exposing a send driver function; driver functions task1, task2, and long_task are also available.
In each agent, send needs to be called 5 times, with each call sending the result of the previous call and the first call sending 0. The two agents' call sequences are independent of each other and should run in parallel.
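One possible sketch (each call to the custom function is assigned its own scheduling group, so the two agents run in parallel instead of being serialized under the same function name):

async def send_five(agent):
    result = 0   # the first call sends 0
    for _ in range(5):
        result = await agent.send(result)   # feed the previous result back in

async with Executor() as exec:
    exec(send_five(env.agent1), sche_group="agent1_send")
    exec(send_five(env.agent2), sche_group="agent2_send")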
Scenario 2:
task1 and task2 need to execute in parallel, synchronizing after each call. Both need to be called 5 times, and long_task needs to execute in parallel with task1 and task2.
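One possible sketch (which agents own task1, task2, and long_task is an assumption):

async def tasks_in_rounds(env):
    for _ in range(5):
        # each round: task1 and task2 run in parallel, then synchronize
        async with Executor() as exec:
            exec(env.agent1.task1())
            exec(env.agent2.task2())

async with Executor() as exec:
    exec(tasks_in_rounds(env))
    exec(env.agent1.long_task())   # runs in parallel with the rounds above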
The Executor waits for all driver functions to complete before exiting, but sometimes it is unnecessary to wait for all of them. You can set the exit condition using the exit parameter when creating the Executor. The exit parameter can be set to all, any, or none, corresponding to exiting after all groups finish, after any group finishes, or immediately without waiting.
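For example (a sketch):

async def send_forever(env):
    value = 0
    while True:                     # infinite loop
        value = await env.agent1.send(value)

async with Executor(exit="any") as exec:
    exec(send_forever(env))
    exec(env.agent2.send(1))
# the block exits as soon as the env.agent2.send group finishes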
In this code, the send_forever function runs in an infinite loop. By setting exit="any", the Executor exits once env.agent2.send completes, without waiting for send_forever. If needed later, you can wait for all tasks to complete by calling exec.wait_all.
4.4.2 - How to Use Pytest to Manage Test Cases
Writing Test Cases
In mlvp, test cases are managed using pytest. pytest is a powerful Python testing framework. If you are not familiar with pytest, refer to the official pytest documentation.
Writing Your First Test Case
First, we need to create a test case file, for example, test_adder.py. The file should start with test_ or end with _test.py so that pytest can recognize it. Then we can write our first test case in it.
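For example (a minimal sketch; the body of my_test depends on your environment):

# test_adder.py
import mlvp

async def my_test():
    env = ...            # build the verification environment here
    ...                  # drive the DUT and check results

def test_adder():
    mlvp.run(my_test())  # execute the coroutine test body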
pytest cannot run coroutine test cases directly, so we call mlvp.run inside the test case to execute the asynchronous body. Once the test case is written, we can run pytest in the terminal.
pytest
pytest will look for all files in the current directory that start with test_ or end with _test.py and will run the functions that start with test_, treating each function as a test case.
Running Coroutine Test Cases
To enable pytest to run coroutine test cases directly, mlvp provides the mlvp_async marker to mark asynchronous test cases.
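For example (a sketch of the marker's usage):

import pytest

@pytest.mark.mlvp_async
async def test_adder():
    env = ...   # build the verification environment here
    ...         # write the test logic directly in the coroutine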
As shown, we simply need to add the @pytest.mark.mlvp_async marker to the test case function, and pytest will be able to run coroutine test cases directly.
Generating Test Reports
When running pytest, mlvp automatically collects the execution results of the test cases, tallies the coverage information, and generates a verification report. To generate this report, add the --mlvp-report parameter when calling pytest.
pytest --mlvp-report
By default, mlvp generates a report name for each run and places the report in the reports directory. You can specify the report directory with the --report-dir parameter and the report name with the --report-name parameter.

At this point, however, mlvp cannot determine the coverage file names, so the report cannot display coverage information. If you want coverage information to be shown in the report, you need to pass the functional coverage group and the line coverage file name in each test case.
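For example (a sketch; DUTAdder, the constructor argument names, and the import paths are illustrative):

import pytest
from mlvp import set_func_coverage, set_line_coverage   # import path assumed
from mlvp.funcov import CovGroup                        # import path assumed

@pytest.mark.mlvp_async
async def test_adder(request):
    dut = DUTAdder(waveform_filename="adder.fst",
                   coverage_filename="adder.dat")   # file names passed at DUT creation
    g = CovGroup("Adder")                           # functional coverage group
    ...                                             # drive the DUT and add watch points
    dut.Finish()                                    # stop recording the waveform
    set_func_coverage(request, g)                   # register the functional coverage group
    set_line_coverage(request, "adder.dat")         # register the line coverage file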
In the code above, when creating the DUT, we pass the names of the waveform file and coverage file, allowing the DUT to generate a coverage file with the specified name during execution. Then we define a coverage group to collect the functional coverage information of the DUT, which will be explained in detail in the next document.
Next, we call the DUT's Finish method to stop recording the waveform file. Finally, we use the set_func_coverage and set_line_coverage functions to set the functional coverage group and the line coverage file information. When we run pytest again, mlvp automatically collects the coverage information and displays it in the report.

Managing Resources with mlvp
However, the above process is quite cumbersome: to keep file names from conflicting between test cases, we must pass different file names in each one. Moreover, if a test case raises an exception, it never reaches the end, and the coverage file is not generated.
Therefore, mlvp provides the mlvp_pre_request Fixture to manage resources and simplify the writing of test cases.
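For example (a sketch; adder_cov_group and DUTAdder are placeholders):

import pytest

@pytest.fixture()
def my_request(mlvp_pre_request):
    # resource management lives in the fixture rather than in each test case
    mlvp_pre_request.add_cov_groups(adder_cov_group)  # coverage group goes into the report
    return mlvp_pre_request.create_dut(DUTAdder)      # mlvp manages waveform/coverage file names

@pytest.mark.mlvp_async
async def test_adder(my_request):
    dut = my_request   # the fixture's return value
    ...                # test logic only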
Fixtures are a pytest concept. In the code above, we define a fixture named my_request. If another test case lists my_request among its parameters, pytest automatically calls the my_request fixture and passes its return value to the test case.

By defining the custom fixture my_request and using it in the test case, resource management is handled inside the fixture, allowing the test case to focus solely on the test logic. The my_request fixture must take mlvp's mlvp_pre_request fixture as a parameter; mlvp_pre_request provides a series of methods for managing resources.

By calling add_cov_groups, the coverage group is automatically included in the report.
Using create_dut, a DUT instance is created, and mlvp automatically manages the generation of the DUT's waveform and coverage files, ensuring that file names do not conflict.

In my_request, you can customize the return values passed to the test case. If you want the fixture to be available to every test case, define it in the conftest.py file.
Thus, we have achieved the separation of test case resource management and logic writing, eliminating the need to manually manage resource creation and release in each test case.
4.4.3 - How to Write Test Points
Test Points in Verification
In mlvp, a test point (Cover Point) refers to the smallest unit of verification for a specific function of the design, while a test group (Cover Group) is a collection of related test points.
To define a test point, you need to specify the name of the test point and its trigger condition. For example, you can define a test point such as, “When the result of the adder operation is non-zero, the result is correct.” In this case, the trigger condition for the test point could be “the sum signal of the adder is non-zero.”
When the trigger condition of the test point is met, the test point is triggered. At this moment, the verification report will record the triggering of the test point and increase the functional coverage of the verification. When all test points are triggered, the functional coverage of the verification reaches 100%.
How to Write Test Points
Before writing test points, you first need to create a test group and specify the name of the test group:
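For example (the CovGroup import path is an assumption):

import mlvp.funcov as fc
from mlvp.funcov import CovGroup   # import path assumed

g = CovGroup("Adder")   # the test group, named "Adder" here for illustration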
Next, you need to add test points to this test group:
# import mlvp.funcov as fc
# g.add_watch_point(adder.io_cout, {"io_cout is 0": fc.Eq(0)}, name="Cout is 0")
TBD (To Be Determined)
4.5 - Starting a New Verification Task
With mlvp, you can now set up a complete verification environment and conveniently write test cases. In real-world scenarios, however, it can be hard to know how to get started and see a verification task through to completion. Common issues include difficulty partitioning the Bundle correctly, misunderstanding the Agent's high-level semantic encapsulation, and not knowing what to do once the environment is set up.
In this section, we will introduce how to complete a new verification task from scratch and how to use mlvp effectively to accomplish it.
1. Understanding the Design Under Test (DUT)
When you first encounter a new design, you may face dozens or even hundreds of input and output signals, which can be overwhelming. At this point, you must trust that these signals are defined by the design engineers, and by understanding the functionality of the design, you can infer the meaning of these signals.
If the design team provides documentation, you can read it to understand the functionality of the design and map the functions to the input and output signals. You should also gain a clear understanding of the signal timing and how to use these signals to drive the design. Generally, you will also need to review the design’s source code to uncover more detailed timing issues.
Once you have a basic understanding of the DUT’s functionality and how to drive its interface, you can start building the verification environment.
2. Partitioning the Bundle
The first step in setting up the environment is to logically partition the interface into several sets, with each set of interfaces considered as a Bundle. Each Bundle should be driven by an independent Agent.
However, in practice, interfaces may be nested: a large interface, Bundle 1, may itself consist of two sub-interfaces, B1.1 and B1.2.

This raises the question: should B1.1 and B1.2 each have their own Agent, or should a single Agent be created for Bundle 1?
The answer depends on the logic of the interface. If a request requires simultaneous operations on both B1.1 and B1.2, then you should create one Agent for Bundle 1 rather than creating separate Agents for B1.1 and B1.2.
That said, creating individual Agents for B1.1 and B1.2 is also feasible. This increases the granularity of the Agent but sacrifices operational continuity, making the upper-level code and reference model more complex. Therefore, the appropriate granularity depends on the specific use case. In the end, all Agents combined should cover the entire DUT Bundle interface.
In practice, for convenience in connecting the DUT, you can define a DUT Bundle that connects all interfaces to this Bundle at once, and then the Env can distribute the sub-Bundles to their respective Agents.
3. Writing the Agent
After partitioning the Bundle, you can start writing the Agents to drive them. You need to write an Agent for each Bundle.
First, you can begin by writing the driver methods, which are high-level semantic encapsulations of the Bundle. These high-level semantic details should carry all the information necessary to drive the Bundle. If a signal in the Bundle requires a value but the method parameters don’t provide the corresponding information, then the encapsulation is incomplete. Avoid assuming any signal values within the driver methods; otherwise, the DUT’s output will be affected by these assumptions, potentially causing discrepancies between the reference model and the DUT.
This high-level encapsulation also defines the functionality of the reference model, which interacts directly with the high-level semantic information, not with the low-level signals.
If the reference model is written using function-call mode, the DUT’s outputs should be returned through function return values. If the reference model uses a separate execution flow, you should write monitoring methods that convert the DUT’s outputs into high-level semantic information and output them via the monitoring methods.
4. Encapsulating into Env
Once all the Agents are written, or selected from existing Agents, you can encapsulate them into the Env. The Env encapsulates the entire verification environment and defines the writing conventions for the reference model.
5. Writing the Reference Model
Writing the reference model doesn’t need to wait until the Env is complete—it can be done alongside the Agent development, with some driving code written in real-time to verify correctness. Of course, if the Agent is well-structured, writing the reference model after the complete Env is created is also feasible.
The most important part of the reference model is choosing the appropriate mode—both function-call mode and separate execution flow mode are viable, but the selection depends on the specific use case.
6. Identifying Functional and Test Points
After writing the Env and reference model, you cannot immediately start writing test cases because there is no direction yet for writing them. Blindly writing test cases won’t ensure complete verification of the design.
First, you need to list the functional and test points. Functional points refer to all the functionalities supported by the design. For example, for an arithmetic logic unit (ALU), functional points could be “supports addition” or “supports multiplication.” Each functional point should correspond to one or more test points, which break the function into different test scenarios to verify whether the functional point is correct. For example, for the “supports addition” functional point, test points could include “addition is correct when both inputs are positive.”
7. Writing Test Cases
Once the list of functional and test points is determined, you can start writing test cases. Each test case should cover one or more test points to verify whether the functional point is correct. All test cases combined should cover all test points (100% functional coverage) and all lines of code (100% line coverage), ensuring verification completeness.
How can you ensure verification correctness? If the reference model comparison method is used, mlvp automatically throws an exception when a mismatch occurs, causing the test case to fail. If direct comparison is used, write the comparison code with assert in the test case; when a comparison fails, the test case fails. When all test cases pass, the functionality is confirmed as correct.

When writing test cases, use the interfaces provided by the Env to drive the DUT. If interaction between multiple driver methods is needed, use the Executor to encapsulate higher-level functions. In other words, interactions at the driver-method level should be handled during test case development.
8. Writing the Verification Report
Once 100% line and functional coverage is achieved, the verification is complete. A verification report should be written to summarize the results. If issues are found in the DUT, the report should provide detailed descriptions of the causes. If 100% coverage is not achieved, the report should explain why. The format of the report should follow the company’s internal standards.
4.6 - API Documentation
4.6.1 - Bundle API
5 - Advanced Case Studies
Complex case studies completed using the open verification platform.
Using TileLink Protocol for L2 Cache Driven by C++
6 - Multi-language Support
The Open Verification Platform supports multiple languages.
6.1 - Using C++
Encapsulate the DUT hardware runtime environment with C++ and compile it into a dynamic library.
Principle Introduction
Basic Library
In this chapter, we introduce how to use Picker to wrap RTL code in a C++ class and compile it into a dynamic library.
First, the Picker tool parses the RTL code, creates a new module based on the specified Top Module, encapsulates the module’s input and output ports, and exports DPI/API to operate the input ports and read the output ports.
The tool determines the module to be encapsulated by specifying the file and module name of the Top Module. Here, the Top Module plays a role analogous to the main function in software programming.
Next, the Picker tool uses the specified simulator to compile the RTL code and generate a DPI library file. This library file contains the logic required to simulate running the RTL code (i.e., the hardware simulator).
For VCS, this library file is a .so (dynamic library) file; for Verilator, it is a .a (static library) file. DPI stands for Direct Programming Interface, which can be understood as an API specification.
Then, the Picker tool renders the base class defined in its source code according to the configuration parameters, generating a base class (wrapper) that interfaces with the simulator while hiding simulator details, and links this base class with the DPI library file to produce a UT dynamic library file.
At this point, the UT library file exposes the unified API provided by the Picker tool's templates. Compared with the simulator-specific API of the DPI library file, the UT library presents a single, simulator-independent interface to the generated hardware simulator.
The generated UT library file is common across different languages! Unless otherwise specified, other high-level languages will operate the hardware simulator by calling the UT dynamic library.
Finally, based on the configuration parameters and parsed RTL code, the Picker tool generates a C++ class source code. This source code is the definition (.hpp) and implementation (.cpp) of the RTL hardware module in the software. Instantiating this class is equivalent to creating a hardware module.
This class inherits from the base class and implements its pure virtual functions, instantiating the hardware in software. There are two reasons for not encapsulating this class implementation into the dynamic library:
Since the UT library file needs to be common across different languages, and different languages have different ways to implement classes, for universality, the class implementation is not encapsulated into the dynamic library.
To facilitate debugging, enhance code readability, and make it easier for users to repackage and modify.
Generating Executable Files
In this chapter, we will introduce how to write test cases and generate executable files based on the basic library generated in the previous chapter (including dynamic libraries, class declarations, and definitions).
First, users need to write test cases, which means instantiating the class generated in the previous chapter and calling the methods in the class to operate the hardware module.
For the instantiation and initialization process, see [Random Number Generator Verification - Configure Test Code](docs/quick-start/examples/rmg/#Configure Test Code).
Second, users need to apply different linking parameters, depending on the simulator used to build the basic library, to generate the executable file. The corresponding parameters are defined in template/cpp/cmake/*.cmake.
Finally, according to the configured linking parameters, the compiler will link the basic library and generate an executable file.
Taking the Adder verification as an example, picker_out_adder/cpp/cmake/*.cmake is a copy of the templates described in the second step above.
vcs.cmake defines the linking parameters of the basic library generated using the VCS simulator, and verilator.cmake defines the linking parameters of the basic library generated using the Verilator simulator.
Usage
The parameter --language cpp or -l cpp is used to specify the generation of the C++ basic library.
The parameter -e is used to generate an executable file containing an example project.
The parameter -v is used to retain intermediate files when generating the project.
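For example, a typical invocation combining these flags might look like the following (the subcommand and RTL file name are illustrative and may differ across Picker versions; check picker --help):

picker export Adder.v --language cpp -e -v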
In C++, the destructor of the DUT automatically calls dut.Finish(), so you only need to delete dut after the test ends to perform post-processing (write waveform, coverage files, etc.).
6.2 - Using Java
Encapsulate the DUT hardware runtime environment with Java and package it as a jar file.
Currently, Picker supports C++/Python. Other languages such as Java, Golang, JavaScript, Scala, etc., will be supported after the Python interface is stabilized.