Quick Start
How to use the open verification platform to participate in hardware verification.
This page will briefly introduce what verification is, as well as concepts used in the examples, such as DUT (Design Under Test) and RM (Reference Model).
Chip Verification
Chip verification is a crucial step in ensuring the correctness and reliability of chip designs. It encompasses functional verification, formal verification, and physical verification; this material covers only simulation-based functional verification. The processes and methods of chip functional verification have much in common with software testing, such as unit testing, system testing, black-box testing, and white-box testing, and the two share similar metrics, such as functional coverage and code coverage. In essence, apart from the different tools and programming languages used, their goals and processes are almost identical, so software test engineers should, in principle, be able to perform chip verification.
In practice, however, software testing and chip verification remain two largely separate fields, mainly because they use different verification tools and languages, which makes it difficult for software test engineers to cross over. Chip verification commonly relies on hardware description languages (e.g., Verilog or SystemVerilog) and specialized commercial tools for circuit simulation. Hardware description languages differ from high-level software programming languages like C++/Python in their unique “clock” semantics, which poses a steep learning curve for software engineers.
To bridge the gap between chip verification and traditional software testing, allowing more people to participate in chip verification, this project provides the following content:
- Multi-language verification tools (Picker), allowing users to use their preferred programming language for chip verification.
- A verification framework (MLVP), enabling functional verification without worrying about the clock.
- An introduction to basic circuits and verification knowledge, helping software enthusiasts understand circuit characteristics more easily.
- Basic learning materials for fundamental verification knowledge.
- Real high-performance chip verification cases, allowing enthusiasts to participate in verification work remotely.
Basic Terms
- DUT: Design Under Test, usually referring to the designed RTL code.
- RM: Reference Model, a standard error-free model corresponding to the unit under test.
- RTL: Register Transfer Level, typically referring to the Verilog or VHDL code corresponding to the chip design.
- Coverage: The percentage of the test range relative to the entire requirement range. In chip verification, this typically includes line coverage, function coverage, and functional coverage.
- DV: Design Verification, referring to the collaboration of design and verification.
- Differential Testing (difftest): Selecting two (or more) functionally identical units under test, submitting the same test cases that meet the unit’s requirements, and observing whether there are differences in the execution results.
The core tool used in this material is Picker (https://github.com/XS-MLVP/picker). Its purpose is to automatically provide high-level programming language interfaces (Python/C++) for design modules written in RTL. With this tool, verification engineers with a software development (testing) background can perform chip verification without learning hardware description languages like Verilog/VHDL.
System Requirements
Recommended operating system: Ubuntu 22.04 LTS
In the development and research of system architecture, Linux is the most commonly used platform, mainly because Linux has a rich set of software and tool resources. Due to its open-source nature, important tools and software (such as Verilator) can be easily developed for Linux. In this course, multi-language verification tools like Picker and Swig can run stably on Linux.
1 - Setting Up the Verification Environment
Install the necessary dependencies, download, build, and install the required tools.
Installing Dependencies
- cmake ( >=3.11 )
- gcc ( supports C++20; at least GCC 10, 11 or higher recommended )
- python3 ( >=3.8 )
- verilator ( ==4.218 )
- verible-verilog-format ( >=0.0-3428-gcfcbb82b )
- swig ( >=4.2.0, for multi-language support )
Please ensure that tools like verible-verilog-format have been added to the environment variable $PATH, so they can be called directly from the command line.
Source Code Download
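The commands below follow the picker README (https://github.com/XS-MLVP/picker); the init step is an assumption based on the upstream build instructions, so verify it against your checkout:
git clone https://github.com/XS-MLVP/picker.git
cd picker
make init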
Build and Install
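A typical build-and-install flow, following the picker README (run from the repository root; the default /usr/local prefix requires sudo):
make
sudo -E make install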
The default installation path is /usr/local, with binary files placed in /usr/local/bin and template files in /usr/local/share/picker.
If you need to change the installation directory, you can pass arguments to cmake by specifying ARGS, for example: make ARGS="-DCMAKE_INSTALL_PREFIX=your_install_dir"
The installation will automatically install the xspcomm base library (https://github.com/XS-MLVP/xcomm), which is used to encapsulate the basic types of RTL modules and is located at /usr/local/lib/libxspcomm.so. You may need to manually set the link directory parameter (-L) during compilation. If support for languages such as Java is enabled, the corresponding xspcomm multi-language packages will also be installed.
picker can also be compiled into a wheel file and installed via pip.
To package picker into a wheel installation package, use the following command:
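The packaging target below is taken from the picker README; treat it as an assumption if your version differs:
make wheel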
After compilation, the wheel file will be located in the dist directory. You can then install it via pip, for example:
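The file name below is illustrative only; the actual wheel includes version and platform tags:
pip install dist/picker-*.whl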
After installation, run the picker command and expect the following output:
XDut Generate.
Convert DUT(*.v/*.sv) to C++ DUT libs.
Usage: ./build/bin/picker [OPTIONS] [SUBCOMMAND]
Options:
-h,--help Print this help message and exit
-v,--version Print version
--show_default_template_path
Print default template path
--show_xcom_lib_location_cpp
Print xspcomm lib and include location
--show_xcom_lib_location_java
Print xspcomm-java.jar location
--show_xcom_lib_location_scala
Print xspcomm-scala.jar location
--show_xcom_lib_location_python
Print python module xspcomm location
--show_xcom_lib_location_golang
Print golang module xspcomm location
--check                     check install location and supported languages
Subcommands:
export Export RTL Projects Sources as Software libraries such as C++/Python
pack Pack UVM transaction as a UVM agent and Python class
Installation Test
picker currently has two subcommands: export and pack.
The export subcommand is used to convert RTL designs into “libraries” in other high-level programming languages, which can then be driven through software.
$ picker export --help
Export RTL Projects Sources as Software libraries such as C++/Python
Usage: picker export [OPTIONS] file...
Positionals:
file TEXT ... REQUIRED DUT .v/.sv source file, contain the top module
Options:
-h,--help Print this help message and exit
--fs,--filelist TEXT ... DUT .v/.sv source files, contain the top module, split by comma.
Or use '*.txt' file with one RTL file path per line to specify the file list
--sim TEXT [verilator] vcs or verilator as simulator, default is verilator
--lang,--language TEXT:{python,cpp,java,scala,golang} [python]
Build example project, default is python, choose cpp, java or python
--sdir,--source_dir TEXT [/home/yaozhicheng/workspace/picker/template]
Template Files Dir, default is ${picker_install_path}/../picker/template
--tdir,--target_dir TEXT [./picker_out]
Codegen render files to target dir, default is ./picker_out
--sname,--source_module_name TEXT ...
Pick the module in DUT .v file, default is the last module in the -f marked file
--tname,--target_module_name TEXT
Set the module name and file name of target DUT, default is the same as source.
For example, -T top, will generate UTtop.cpp and UTtop.hpp with UTtop class
--internal TEXT Exported internal signal config file, default is empty, means no internal pin
-F,--frequency TEXT [100MHz]
Set the frequency of the **only VCS** DUT, default is 100MHz, use Hz, KHz, MHz, GHz as unit
-w,--wave_file_name TEXT Wave file name, empty means don't dump wave
-c,--coverage Enable coverage, default is not selected as OFF
--cp_lib,--copy_xspcomm_lib BOOLEAN [1]
Copy xspcomm lib to generated DUT dir, default is true
-V,--vflag TEXT User defined simulator compile args, passthrough.
Eg: '-v -x-assign=fast -Wall --trace' || '-C vcs -cc -f filelist.f'
-C,--cflag TEXT User defined gcc/clang compile command, passthrough. Eg:'-O3 -std=c++17 -I./include'
--verbose Verbose mode
-e,--example Build example project, default is OFF
--autobuild BOOLEAN [1] Auto build the generated project, default is true
Static Multi-Module Support:
When generating the wrapper for dut_top.sv/v, picker allows specifying multiple module names and their corresponding quantities using the --sname parameter. For example, suppose the design files a.v and b.v contain modules A and B, and the generated DUT needs 2 instances of A and 3 instances of B, combined into a module named C (if not specified, the default name is A_B). This can be achieved using the following command:
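A hypothetical invocation matching the example above (the a.v/b.v file names and the A/B/C module names are placeholders; the name,count syntax for --sname follows the picker documentation):
picker export a.v,b.v --sname A,2,B,3 --tname C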
Environment Variables:
- DUMPVARS_OPTION: Sets the option parameter for $dumpvars. For example, DUMPVARS_OPTION="+mda" picker ... enables array waveform support in VCS.
- SIMULATOR_FLAGS: Parameters passed to the backend simulator. Refer to the documentation of the specific backend simulator for details.
- CFLAGS: Sets the -cflags parameter for the backend simulator.
The pack subcommand is used to convert a UVM sequence_item into other languages and then communicate through TLM (currently Python is supported; other languages are under development).
$ picker pack --help
Test Examples
After picker compilation, execute the following commands in the picker directory to test the examples:
bash example/Adder/release-verilator.sh --lang cpp
bash example/Adder/release-verilator.sh --lang python
# cpp and python examples are enabled by default; for other language support, build picker with:
# make BUILD_XSPCOMM_SWIG=python,java,scala,golang
bash example/Adder/release-verilator.sh --lang java
bash example/Adder/release-verilator.sh --lang scala
bash example/Adder/release-verilator.sh --lang golang
bash example/RandomGenerator/release-verilator.sh --lang cpp
bash example/RandomGenerator/release-verilator.sh --lang python
bash example/RandomGenerator/release-verilator.sh --lang java
More Documents
For guidance on chip verification with picker, please refer to: https://open-verify.cc/mlvp/en/docs/
2 - Case 1: Adder
Demonstrates the principles and usage of the tool based on a simple adder verification. This adder is implemented using simple combinational logic.
RTL Source Code
In this case, we drive a 64-bit adder (combinational circuit) with the following source code:
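The original Verilog source is not included in this view; the following minimal implementation matches the description below (the module and port names are assumptions consistent with the rest of this case):

module Adder #(
    parameter WIDTH = 64
) (
    input  [WIDTH-1:0] a,
    input  [WIDTH-1:0] b,
    input              cin,
    output [WIDTH-1:0] sum,
    output             cout
);
    // Concatenate carry-out with the sum so the addition is WIDTH+1 bits wide
    assign {cout, sum} = a + b + cin;
endmodule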
The adder takes two 64-bit numbers and a carry-in signal as inputs, and outputs a 64-bit sum and a carry-out signal.
Testing Process
During the testing process, we will create a folder named Adder containing a file called Adder.v with the above RTL source code.
Exporting RTL to Python Module
Navigate to the Adder folder and execute the following command:
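A representative invocation (flag spellings follow the picker export help above; the exact command in the original docs may differ slightly):
picker export --autobuild=false Adder.v -w Adder.fst --sname Adder --tdir picker_out_adder --lang python -e --sim verilator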
This command performs the following actions:
- Uses Adder.v as the top file, with Adder as the top module, and generates a dynamic library using the Verilator simulator with Python as the target language.
- Enables waveform output, with the target waveform file being Adder.fst.
- Includes files for driving the example project (-e) and does not automatically compile after code generation (--autobuild=false).
- Outputs the final files to the picker_out_adder directory.
Some command-line parameters were not used in this command, and they will be introduced in later sections.
The output directory structure is as follows. Note that these are all intermediate files and cannot be used directly:
Navigate to the picker_out_adder directory and execute the make command to generate the final files.
1. Use the simulator invocation script defined by cmake/*.cmake to compile Adder_top.sv and related files into the libDPIAdder.so dynamic library.
2. Use the compilation script defined by CMakeLists.txt to wrap libDPIAdder.so into the libUTAdder.so dynamic library through dut_base.cpp. The outputs of steps 1 and 2 are copied to the UT_Adder directory.
3. Generate the wrapper layer using the SWIG tool with the dut_base.hpp and dut.hpp header files, and build a Python module in the UT_Adder directory.
4. If the -e parameter is included, the pre-defined example.py is placed in the parent directory of the UT_Adder directory as sample code for calling this Python module.
The final directory structure is:
Setting Up Test Code
Replace the content of example.py with the following Python test code.
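The original script is not reproduced in this view; the following is a minimal sketch, assuming the picker-generated class is named DUTAdder, pins are accessed through their .value attributes, and Step/Finish behave as described elsewhere in this guide:

from UT_Adder import *   # Python module generated by picker
import random

def main():
    dut = DUTAdder()                       # class name assumed: DUT + module name
    for i in range(10):
        a = random.getrandbits(64)
        b = random.getrandbits(64)
        cin = random.getrandbits(1)
        dut.a.value = a                    # drive the inputs
        dut.b.value = b
        dut.cin.value = cin
        dut.Step(1)                        # advance so combinational outputs settle
        total = a + b + cin
        ref_sum = total & ((1 << 64) - 1)  # reference model: 64-bit sum
        ref_cout = total >> 64             # and carry-out
        assert dut.sum.value == ref_sum, f"sum mismatch in test {i}"
        assert dut.cout.value == ref_cout, f"cout mismatch in test {i}"
        print(f"[test {i}] a=0x{a:x}, b=0x{b:x}, cin=0x{cin:x} "
              f"=> sum: 0x{dut.sum.value:x}, cout: 0x{dut.cout.value:x}")
    print("Test Passed")
    dut.Finish()                           # finish simulation and flush the waveform

if __name__ == "__main__":
    main()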
Running the Test
In the picker_out_adder directory, execute the python3 example.py command to run the test. After the test completes, we can see the output of the example project.
[...]
[test 114507] a=0x7adc43f36682cffe, b=0x30a718d8cf3cc3b1, cin=0x0 => sum: 0x12358823834579604399, cout: 0x0
[test 114508] a=0x3eb778d6097e3a72, b=0x1ce6af17b4e9128, cin=0x0 => sum: 0x4649372636395916186, cout: 0x0
[test 114509] a=0x42d6f3290b18d4e9, b=0x23e4926ef419b4aa, cin=0x1 => sum: 0x7402657300381600148, cout: 0x0
[test 114510] a=0x505046adecabcc, b=0x6d1d4998ed457b06, cin=0x0 => sum: 0x7885127708256118482, cout: 0x0
[test 114511] a=0x16bb10f22bd0af50, b=0x5813373e1759387, cin=0x1 => sum: 0x2034576336764682968, cout: 0x0
[test 114512] a=0xc46c9f4aa798106, b=0x4d8f52637f0417c4, cin=0x0 => sum: 0x6473392679370463434, cout: 0x0
[test 114513] a=0x3b5387ba95a7ac39, b=0x1a378f2d11b38412, cin=0x0 => sum: 0x6164045699187683403, cout: 0x0
Test Passed
3 - Case 2: Random Number Generator
Demonstrating the tool usage with a 16-bit LFSR random number generator, which includes a clock signal, sequential logic, and registers.
RTL Source Code
In this example, we drive a random number generator, with the source code as follows:
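A sketch of the generator consistent with the update rules listed below (the signal names reset, seed, and random_number are assumptions based on this case's test description):

module RandomGenerator (
    input  wire        clk,
    input  wire        reset,
    input  wire [15:0] seed,
    output wire [15:0] random_number
);
    reg [15:0] lfsr;
    always @(posedge clk or posedge reset) begin
        if (reset)
            lfsr <= seed;                               // load the seed on reset
        else
            lfsr <= {lfsr[14:0], lfsr[15] ^ lfsr[14]};  // shift left, new_bit into LSB
    end
    assign random_number = lfsr;
endmodule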
This random number generator contains a 16-bit LFSR, with a 16-bit seed as input and a 16-bit random number as output. The LFSR is updated according to the following rules:
- XOR the highest bit and the second-highest bit of the current LFSR to generate a new_bit.
- Shift the original LFSR left by one bit and place new_bit in the lowest bit.
- Discard the highest bit.
Testing Process
During testing, we will create a folder named RandomGenerator containing a RandomGenerator.v file with the RTL source code above.
Building the RTL into a Python Module
Navigate to the RandomGenerator folder and execute the following command:
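A representative invocation, mirroring the Adder case (the exact command in the original docs may differ slightly):
picker export --autobuild=false RandomGenerator.v -w RandomGenerator.fst --sname RandomGenerator --tdir picker_out_rmg --lang python -e --sim verilator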
This command does the following:
- Uses RandomGenerator.v as the top file, with RandomGenerator as the top module, and generates a dynamic library using the Verilator simulator with Python as the target language.
- Enables waveform output, with the target waveform file being RandomGenerator.fst.
- Includes files for driving the example project (-e) and does not automatically compile after code generation (--autobuild=false).
- Outputs the final files to the picker_out_rmg directory.
The output directory structure is similar to Adder Verification - Generating Intermediate Files, so it will not be elaborated here.
Navigate to the picker_out_rmg directory and execute the make command to generate the final files.
Note: The compilation process is similar to Adder Verification - Compilation Process, so it will not be elaborated here.
The final directory structure will be:
Configuring the Test Code
Replace the content of example.py with the following code.
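The original script is not reproduced in this view; the following minimal sketch assumes the generated class DUTRandomGenerator, the pin names from the RTL sketch above, and the InitClock helper used elsewhere in this guide:

from UT_RandomGenerator import *
import random

class LFSR16:
    """Software reference model of the 16-bit LFSR."""
    def __init__(self, seed):
        self.state = seed & 0xFFFF
    def step(self):
        new_bit = ((self.state >> 15) ^ (self.state >> 14)) & 1
        self.state = ((self.state << 1) | new_bit) & 0xFFFF

def main():
    dut = DUTRandomGenerator()
    dut.InitClock("clk")             # bind the clock pin so Step toggles it
    seed = random.getrandbits(16)
    dut.seed.value = seed
    dut.reset.value = 1              # assert reset to load the seed
    dut.Step(1)
    dut.reset.value = 0              # release reset; the LFSR starts shifting
    ref = LFSR16(seed)
    for i in range(1000):
        dut.Step(1)                  # advance DUT and model in lockstep
        ref.step()
        assert dut.random_number.value == ref.state, \
            f"mismatch at cycle {i}: dut=0x{dut.random_number.value:x}, ref=0x{ref.state:x}"
    print("Test Passed")
    dut.Finish()                     # flush RandomGenerator.fst

if __name__ == "__main__":
    main()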
Running the Test Program
Execute python example.py in the picker_out_rmg directory to run the test program. If Test Passed is printed at the end of the run, the test is considered passed. After the run completes, a waveform file RandomGenerator.fst is generated, which can be viewed in the terminal using the following command:
gtkwave RandomGenerator.fst
Example output:
4 - Case 3: Dual-Port Stack (Callback)
A dual-port stack is a stack with two ports, each supporting push and pop operations. This case study uses a dual-port stack as an example to demonstrate how to use callback functions to drive the DUT.
Introduction to the Dual-Port Stack
A dual-port stack is a data structure that supports simultaneous operations on two ports. Compared to a traditional single-port stack, a dual-port stack allows simultaneous read and write operations. In scenarios such as multithreaded concurrent read and write operations, the dual-port stack can provide better performance. In this example, we provide a simple dual-port stack implementation, with the source code as follows:
In this implementation, aside from the clock signal (clk) and reset signal (rst), there are input and output signals for the two ports, which share the same interface definition. The meaning of each port signal is as follows:
- Request Port (in)
  - in_valid: Input data valid signal
  - in_ready: Input data ready signal
  - in_data: Input data
  - in_cmd: Input command (0: PUSH, 1: POP)
- Response Port (out)
  - out_valid: Output data valid signal
  - out_ready: Output data ready signal
  - out_data: Output data
  - out_cmd: Output command (2: PUSH_OKAY, 3: POP_OKAY)
When we want to perform an operation on the stack through a port, we first need to write the required data and command to the input port, and then wait for the output port to return the result.
Specifically, if we want to perform a PUSH operation on the stack, we first write the data to be pushed into in_data, set in_cmd to 0 to indicate a PUSH operation, and set in_valid to 1 to indicate that the input data is valid. Next, we wait for in_ready to be 1, which ensures the data has been correctly received; at this point the PUSH request has been correctly sent.
After the command is successfully sent, we wait for the stack’s response on the response port. When out_valid is 1, the stack has completed the corresponding operation. We can then read the returned data from out_data (the data returned by a POP operation is placed here) and the returned command from out_cmd. After reading the data, we set out_ready to 1 to notify the stack that the response has been correctly received.
If requests from both ports are valid simultaneously, the stack will prioritize processing requests from port 0.
Setting Up the Driver Environment
Similar to Case Study 1 and Case Study 2, before testing the dual-port stack, we first need to use the Picker tool to build the RTL code into a Python Module. After the build is complete, we will use a Python script to drive the RTL code for testing.
First, create a file named dual_port_stack.v and copy the above RTL code into it. Then execute the following command in the same folder:
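A representative invocation (here with --autobuild=true, since this case uses the generated module directly; the exact command in the original docs may differ slightly):
picker export --autobuild=true dual_port_stack.v -w dual_port_stack.fst --sname dual_port_stack --tdir picker_out_dual_port_stack --lang python -e --sim verilator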
The generated driver environment is located in the picker_out_dual_port_stack folder. Inside, UT_dual_port_stack is the generated Python Module, and example.py is the test script.
You can run the test script with the following commands:
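Assuming the output directory generated above:
cd picker_out_dual_port_stack
python3 example.py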
If no errors occur during the run, it means the environment has been set up correctly.
Driving the DUT with Callback Functions
In this case, we need to drive a dual-port stack to test its functionality. However, you may quickly realize that the methods used in Cases 1 and 2 are insufficient for driving a dual-port stack. In the previous tests, the DUT had a single execution logic where you input data into the DUT and wait for the output.
However, a dual-port stack is different because its two ports operate with independent execution logic. During the drive process, these two ports might be in entirely different states. For example, while port 0 is waiting for data from the DUT, port 1 might be sending a new request. In such situations, simple sequential execution logic will struggle to drive the DUT effectively.
Therefore, in this case, we will use the dual-port stack as an example to introduce a callback function-based driving method to handle such DUTs.
Introduction to Callback Functions
A callback function is a common programming technique that allows us to pass a function as an argument, to be called when a certain condition is met. The generated Python Module provides an interface, StepRis, for registering callback functions with the internal execution environment. Here’s how it works:
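A minimal sketch, assuming the DUTdual_port_stack module generated in the previous section and the InitClock helper used elsewhere in this guide:

from UT_dual_port_stack import *

def callback(cycles):
    print(f"The current clock cycle is {cycles}")

dut = DUTdual_port_stack()
dut.InitClock("clk")     # bind the clock pin
dut.StepRis(callback)    # invoke callback on every rising clock edge
dut.Step(10)             # run 10 clock cycles
dut.Finish()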
You can run this code directly to see the effect of the callback function.
In the above code, we define a callback function callback that takes a cycles parameter and prints the current clock cycle each time it is called. We then register this callback function with the DUT via StepRis.
Once the callback function is registered, each time the Step function runs (which corresponds to one clock cycle), the callback function is invoked on the rising edge of the clock signal, with the current clock cycle passed as an argument.
Using this approach, we can write different execution logics as callback functions and register multiple callback functions with the DUT, thereby driving the DUT in parallel.
Dual-Port Stack Driven by Callback Functions
To complete a full execution logic using callback functions, we typically write it in the form of a state machine. Each callback function invocation triggers a state change within the state machine, and multiple invocations complete a full execution logic.
Below is an example code for driving a dual-port stack using callback functions:
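The full script from the docs site is not reproduced in this view; the following condensed sketch follows the same structure, assuming the generated class DUTdual_port_stack, pin names of the form in0_valid/out0_valid, an 8-bit data width, and simplified model-commit points (push committed when the request is accepted, pop checked when the response returns):

import random
from UT_dual_port_stack import *

class StackModel:
    """Reference model; shared by both ports since there is one hardware stack."""
    def __init__(self):
        self.stack = []
    def commit_push(self, data):
        self.stack.append(data)
        print(f"push: 0x{data:x}")
    def commit_pop(self, dut_data):
        ref = self.stack.pop()
        assert dut_data == ref, f"pop mismatch: dut 0x{dut_data:x} != model 0x{ref:x}"
        print(f"pop:  0x{dut_data:x} pass")

IDLE, WAIT_REQ_READY, WAIT_RESP_VALID = range(3)   # driver states
PUSH, POP, PUSH_OKAY, POP_OKAY = range(4)          # bus commands

def port_pins(dut, i):
    # Pin names are assumed to follow the pattern in<i>_*, out<i>_* in the RTL.
    names = ["in_valid", "in_ready", "in_data", "in_cmd",
             "out_valid", "out_ready", "out_data", "out_cmd"]
    return {n: getattr(dut, n.replace("_", f"{i}_", 1)) for n in names}

class SinglePortDriver:
    def __init__(self, model, port):
        self.model = model
        self.port = port
        self.status = IDLE
        self.operation_num = 0        # requests issued so far
        self.remaining_delay = 0      # idle cycles before the next request

    def step_callback(self, cycles):
        p = self.port
        if self.status == IDLE:
            if self.remaining_delay > 0:
                self.remaining_delay -= 1
            elif self.operation_num < 20:              # 10 PUSHes, then 10 POPs
                is_push = self.operation_num < 10
                p["in_valid"].value = 1
                p["in_cmd"].value = PUSH if is_push else POP
                if is_push:
                    p["in_data"].value = random.getrandbits(8)
                self.operation_num += 1
                self.status = WAIT_REQ_READY
        elif self.status == WAIT_REQ_READY:
            if p["in_ready"].value == 1:               # request accepted
                if p["in_cmd"].value == PUSH:
                    self.model.commit_push(p["in_data"].value)
                p["in_valid"].value = 0
                p["out_ready"].value = 1               # ready for the response
                self.status = WAIT_RESP_VALID
        elif self.status == WAIT_RESP_VALID:
            if p["out_valid"].value == 1:              # response arrived
                if p["out_cmd"].value == POP_OKAY:
                    self.model.commit_pop(p["out_data"].value)
                p["out_ready"].value = 0
                self.remaining_delay = random.randint(0, 5)
                self.status = IDLE

def test_stack():
    dut = DUTdual_port_stack()
    dut.InitClock("clk")
    dut.rst.value = 1
    dut.Step(1)                        # apply reset
    dut.rst.value = 0
    model = StackModel()
    drivers = [SinglePortDriver(model, port_pins(dut, i)) for i in range(2)]
    for d in drivers:
        dut.StepRis(d.step_callback)   # one callback per port, run every cycle
    dut.Step(200)
    dut.Finish()

if __name__ == "__main__":
    test_stack()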
In the code above, the driving process is implemented such that each port independently drives the DUT, with a random delay added after each request completes. Each port performs 10 PUSH operations and 10 POP operations.
When a PUSH or POP request takes effect, the corresponding commit_push or commit_pop function in the StackModel is called to simulate stack behavior. After each POP operation, the data returned by the DUT is compared with the model’s data to ensure consistency.
To implement the driving behavior for a single port, we created the SinglePortDriver class, which includes methods for sending and receiving data; the step_callback function handles the internal update logic.
In the test_stack function, we create a SinglePortDriver instance for each port of the dual-port stack, pass in the corresponding interfaces, and register the callback functions with the DUT using the StepRis function. When dut.Step(200) is called, the callback functions are automatically invoked each clock cycle to complete the entire driving logic.
SinglePortDriver Driving Logic
As mentioned earlier, callback functions typically require the execution logic to be implemented as a state machine. Therefore, the SinglePortDriver class records the status of each port, which is one of:
- IDLE: Idle state, waiting for the next operation.
  - In the idle state, check the remaining_delay status to determine whether the current delay has ended. If it has, proceed with the next operation; otherwise, continue waiting.
  - When the next operation is ready, check the operation_num status (the number of operations already performed) to determine whether the next operation should be PUSH or POP. Then call the corresponding function to assign values to the port and switch the status to WAIT_REQ_READY.
- WAIT_REQ_READY: Waiting for the request port to be ready.
  - After the request is sent (in_valid is valid), wait for the in_ready signal to be valid, ensuring the request has been correctly received.
  - Once the request is correctly received, set in_valid to 0 and out_ready to 1, indicating that the request is complete and the driver is ready to receive a response.
- WAIT_RESP_VALID: Waiting for the response port to return data.
  - After the request is correctly received, wait for the DUT’s response, i.e., for the out_valid signal to become valid. When out_valid is valid, the response has been generated and the request is complete. Set out_ready to 0 and switch the status back to IDLE.
Running the Test
Copy the above code into example.py, and then run the following command:
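Assuming you are in the picker_out_dual_port_stack directory generated earlier:
python3 example.py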
You can run the test code for this case directly, and you will see output similar to the following:
In the output, you can see the data for each PUSH and POP operation, as well as the result of each POP operation. If there is no error message in the output, the test has passed.
Pros and Cons of Callback-Driven Design
By using callbacks, we can achieve parallel driving of the DUT, as demonstrated in this example. We utilized two callbacks to drive two ports with independent execution logic. In simple scenarios, callbacks offer a straightforward method for parallel driving.
However, as shown in this example, even implementing a simple “request-response” flow requires maintaining a significant amount of internal state. Callbacks break down what should be a cohesive execution logic into multiple function calls, adding considerable complexity to both the code writing and debugging processes.
5 - Case 4: Dual-Port Stack (Coroutines)
The dual-port stack is a stack with two ports, each supporting push and pop operations. This case study uses the dual-port stack as an example to demonstrate how to drive a DUT using coroutines.
Introduction to the Dual-Port Stack and Environment Setup
The dual-port stack used in this case is identical to the one implemented in Case 3. Please refer to the Introduction to the Dual-Port Stack and Driver Environment Setup in Case 3 for more details.
Driving the DUT Using Coroutines
In Case 3, we used callbacks to drive the DUT. While callbacks offer a way to perform parallel operations, they break the execution flow into multiple function calls and require maintaining a large amount of intermediate state, making the code more complex to write and debug.
In this case, we will introduce a method of driving the DUT using coroutines. This method not only allows for parallel operations but also avoids the issues associated with callbacks.
Introduction to Coroutines
Coroutines are a form of “lightweight” threading that enables behavior similar to concurrent execution without the overhead of traditional threads. Coroutines operate on a single-threaded event loop, where multiple coroutines can be defined and added to the event loop, with the event loop managing their scheduling.
Typically, a defined coroutine will continue to execute until it encounters an event that requires waiting. At this point, the event loop pauses the coroutine and schedules other coroutines to run. Once the event occurs, the event loop resumes the paused coroutine to continue execution.
For parallel execution in hardware verification, this behavior is precisely what we need. We can create multiple coroutines to handle various verification tasks. We can treat the clock execution as an event, and within each coroutine, wait for this event. When the clock signal arrives, the event loop wakes up all the waiting coroutines, allowing them to continue executing until they wait for the next clock signal.
We use Python’s asyncio to implement coroutine support:
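A minimal sketch, assuming the generated DUT exposes the asynchronous helpers AStep (wait for n clock cycles) and RunStep (drive the clock) in addition to the interfaces used earlier:

import asyncio
from UT_dual_port_stack import *

async def my_coro(dut, name):
    for i in range(10):
        print(f"{name}: {i}")
        await dut.AStep(1)      # wait for the next clock cycle

async def test_dut(dut):
    asyncio.create_task(my_coro(dut, "coroutine 1"))
    asyncio.create_task(my_coro(dut, "coroutine 2"))
    await dut.RunStep(10)       # background clock: drive 10 cycles

dut = DUTdual_port_stack()
dut.InitClock("clk")
asyncio.run(test_dut(dut))
dut.Finish()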
You can run the above code directly to observe the execution of coroutines. In the code, we use create_task to create two coroutine tasks and add them to the event loop; each task repeatedly prints a number and waits for the next clock signal. We use dut.RunStep(10) to create a background clock, which continuously generates clock synchronization signals so that the other coroutines resume execution whenever a clock signal arrives.
Driving the Dual-Port Stack with Coroutines
Using coroutines, we can write the logic for driving each port of the dual-port stack as an independent execution flow without needing to maintain a large amount of intermediate state.
Below is a simple verification code using coroutines:
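As in Case 3, the full script is not reproduced in this view; the following condensed sketch makes the same assumptions about pin names (in0_valid/out0_valid and so on) and uses the async helpers AStep/RunStep introduced above:

import asyncio
import random
from UT_dual_port_stack import *

PUSH, POP, PUSH_OKAY, POP_OKAY = range(4)   # bus commands

class StackModel:
    """Reference model; shared by both ports since there is one hardware stack."""
    def __init__(self):
        self.stack = []
    def commit_push(self, data):
        self.stack.append(data)
        print(f"push: 0x{data:x}")
    def commit_pop(self, dut_data):
        ref = self.stack.pop()
        assert dut_data == ref, f"pop mismatch: dut 0x{dut_data:x} != model 0x{ref:x}"
        print(f"pop:  0x{dut_data:x} pass")

def port_pins(dut, i):
    # Pin names are assumed to follow the pattern in<i>_*, out<i>_* in the RTL.
    names = ["in_valid", "in_ready", "in_data", "in_cmd",
             "out_valid", "out_ready", "out_data", "out_cmd"]
    return {n: getattr(dut, n.replace("_", f"{i}_", 1)) for n in names}

class SinglePortDriver:
    def __init__(self, dut, model, port):
        self.dut, self.model, self.port = dut, model, port

    async def send_req(self, is_push):
        p = self.port
        p["in_valid"].value = 1
        p["in_cmd"].value = PUSH if is_push else POP
        if is_push:
            p["in_data"].value = random.getrandbits(8)
        await self.dut.AStep(1)
        while p["in_ready"].value != 1:        # wait until the DUT accepts the request
            await self.dut.AStep(1)
        if is_push:
            self.model.commit_push(p["in_data"].value)
        p["in_valid"].value = 0

    async def receive_resp(self):
        p = self.port
        p["out_ready"].value = 1
        await self.dut.AStep(1)
        while p["out_valid"].value != 1:       # wait for the response
            await self.dut.AStep(1)
        if p["out_cmd"].value == POP_OKAY:
            self.model.commit_pop(p["out_data"].value)
        p["out_ready"].value = 0

    async def exec_once(self, is_push):
        await self.send_req(is_push)
        await self.receive_resp()
        for _ in range(random.randint(0, 5)):  # random delay between operations
            await self.dut.AStep(1)

    async def main(self):
        for _ in range(10):
            await self.exec_once(is_push=True)
        for _ in range(10):
            await self.exec_once(is_push=False)

async def test_stack(dut):
    model = StackModel()
    d0 = SinglePortDriver(dut, model, port_pins(dut, 0))
    d1 = SinglePortDriver(dut, model, port_pins(dut, 1))
    asyncio.create_task(d0.main())
    asyncio.create_task(d1.main())
    await dut.RunStep(200)                     # background clock for the whole test

dut = DUTdual_port_stack()
dut.InitClock("clk")
dut.rst.value = 1
dut.Step(1)                                    # apply reset before starting the loop
dut.rst.value = 0
asyncio.run(test_stack(dut))
dut.Finish()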
Similar to Case 3, we define a SinglePortDriver class to handle the logic for driving a single port. In the main function, we create two instances of SinglePortDriver, each responsible for driving one of the two ports. The driving processes for both ports are added to the event loop using asyncio.create_task. Finally, we use dut.RunStep(200) to create a background clock to drive the test.
This code implements the same test logic as in Case 3, where each port performs 10 PUSH and 10 POP operations, followed by a random delay after each operation. As you can see, using coroutines eliminates the need to maintain any intermediate state.
SinglePortDriver Logic
In the SinglePortDriver class, we encapsulate a single operation in the exec_once function. In the main function, we first call exec_once(is_push=True) 10 times to complete the PUSH operations, and then call exec_once(is_push=False) 10 times to complete the POP operations.
In the exec_once function, we first call send_req to send a request, then call receive_resp to receive the response, and finally wait a random number of clock cycles to simulate a delay.
The send_req and receive_resp functions have similar logic: they set the corresponding input/output signals to the appropriate values and wait for the corresponding signals to become valid, written directly in the execution order of the port.
As before, we use the StackModel class to simulate stack behavior. The commit_push and commit_pop functions simulate the PUSH and POP operations, respectively, with the POP operation also comparing the returned data.
Running the Test
Copy the above code into example.py and then execute the following commands:
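As before, from the picker_out_dual_port_stack directory:
python3 example.py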
You can run the test code for this case directly, and you will see output similar to the following:
In the output, you can see the data for each PUSH and POP operation, as well as the result of each POP operation. If there are no error messages in the output, the test passed.
Pros and Cons of Coroutine-Driven Design
Using coroutine functions, we can effectively achieve parallel operations while avoiding the issues that come with callback functions. Each independent execution flow can be fully retained as a coroutine, which greatly simplifies code writing.
However, in more complex scenarios, you may find that having many coroutines can make synchronization and timing management between them more complicated. This is especially true when you need to synchronize between two coroutines that do not directly interact with the DUT.
At this point, you’ll need a set of coroutine coding standards and design patterns for verification code to help you write coroutine-based verification code more effectively. To that end, we provide the mlvp library, which offers a set of design patterns for coroutine-based verification code. You can learn more about mlvp and how it can help you write better verification code in its documentation (https://open-verify.cc/mlvp/en/docs/).