This page will briefly introduce what verification is and concepts used in the examples, such as DUT (Design Under Test) and RM (Reference Model).
Environment Usage
- 1: Tool Introduction
- 2: Waveform Generation
- 3: Multi-File Input
- 4: Coverage Statistics
- 5: Integrated Testing Framework
- 5.1: PyTest
- 5.2: Hypothesis
1 - Tool Introduction
To meet the requirements of an open verification environment, we have developed the Picker tool, which is used to convert RTL designs into multi-language interfaces for verification. We will use the environment generated by the Picker tool as the basic verification environment. Next, we will introduce the Picker tool and its basic usage.
Introduction to Picker
Picker is an auxiliary tool for chip verification with two main functions:
- Packaging RTL Design Verification Modules: Picker can package RTL design verification modules (.v/.scala/.sv) into dynamic libraries and provide programming interfaces in various high-level languages (currently supporting C++, Python, Java, Scala, Golang) to drive the circuit.
- Automatic UVM-TLM Code Generation: Picker can automate TLM code encapsulation based on the UVM sequence_item provided by the user, providing a communication interface between UVM and other high-level languages such as Python.
This tool allows users to perform chip unit testing based on existing software testing frameworks such as pytest, junit, TestNG, go test, etc.
Advantages of Verification Using Picker:
- No RTL Design Leakage: After conversion by Picker, the original design files (.v) are transformed into binary files (.so). Verification can still be performed without the original design files, and the verifier cannot access the RTL source code.
- Reduced Compilation Time: When the DUT (Design Under Test) is stable, it only needs to be compiled once (packaged into a .so file).
- Wide User Base: With support for multiple programming interfaces, it caters to developers of various languages.
- Utilization of a Rich Software Ecosystem: Supports ecosystems such as Python3, Java, Golang, etc.
- Automated UVM Transaction Encapsulation: Enables communication between UVM and Python through automated UVM transaction encapsulation.
RTL Simulators Currently Supported by Picker:
- Verilator
- Synopsys VCS
Working Principle of Picker
The main function of Picker is to convert Verilog code into C++ or Python code. For example, using a processor developed with Chisel: first, it is converted into Verilog code through Chisel's built-in tools, and then Picker provides high-level programming language interfaces.
Python Module Generation
Process of Module Generation
Picker exports Python modules based on C++.
- Picker is a code generation tool. It first generates project files and then uses make to compile them into binary files.
- Picker first uses a simulator to compile the RTL code into a C++ class and then compiles it into a dynamic library (see the C++ steps for details).
- Using the Swig tool, Picker then exports the dynamic library as a Python module based on the C++ header file definitions generated in the previous step.
- Finally, the generated module is exported to a directory, with other intermediate files being either cleaned up or retained as needed.
Swig is a tool used to export C/C++ code to other high-level languages. It parses C++ header files and generates corresponding intermediate code. For detailed information on the generation process, please refer to the Swig official documentation. For information on how Picker generates C++ classes, please refer to the C++ section.
- The generated module can be imported and used by other Python programs, with a file structure similar to that of standard Python modules.
Using the Python Module
- The --language python or --lang python parameter specifies the generation of the Python base library.
- The --example, -e parameter generates an executable file containing an example project.
- The --verbose, -v parameter preserves intermediate files generated during project creation.
Using the Tool to Generate Python’s DUT Class
Using the simple adder example from Case One:
- Picker automatically generates a base class in Python, referred to as the DUT class. For the adder example, the user needs to write test cases, importing the Python module generated in the previous section and calling its methods to operate on the hardware module. The directory structure is as follows:
picker_out_adder
|-- UT_Adder # Project generated by Picker tool
| |-- Adder.fst.hier
| |-- _UT_Adder.so
| |-- __init__.py
| |-- libDPIAdder.a
| |-- libUTAdder.so
| `-- libUT_Adder.py
`-- example.py # User-written code
- The DUTAdder class has a total of eight methods, as shown below:
class DUTAdder:
def InitClock(name: str) # Initialize clock, with the clock pin name as a parameter, e.g., clk
def Step(i: int = 1) # Advance the circuit by i cycles
def StepRis(callback: Callable, args=(), kwargs={}) # Set rising edge callback function
def StepFal(callback: Callable, args=(), kwargs={}) # Set falling edge callback function
def SetWaveform(filename) # Set waveform file
def SetCoverage(filename) # Set code coverage file
def RefreshComb() # Advance combinational circuit
def Finish() # Destroy the circuit
- Pins corresponding to the DUT, such as reset and clock, are represented as member variables in the DUTAdder class. As shown below, pin values can be read and written via the value attribute.
from UT_Adder import *
dut = DUTAdder()
dut.a.value = 1 # Assign value to the pin by setting the .value attribute
dut.a[12] = 1 # Assign value to the 12th bit of the input pin a
x = dut.a.value # Read the value of pin a
y = dut.a[12] # Read the 12th bit of pin a
General Flow for Driving DUT
- Create DUT and Set Pin Modes: By default, pins are assigned values on the rising edge of the next cycle. For combinational logic, you need to set the assignment mode to immediate assignment.
- Initialize the Clock: This binds the clock pin to the internal xclock of the DUT. Combinational logic does not require a clock and can be ignored.
- Reset the Circuit: Most sequential circuits need to be reset.
- Write Data to DUT Input Pins: Use the pin.Set(x) interface or pin.value = x for assignment.
- Drive the Circuit: Use Step for sequential circuits and RefreshComb for combinational circuits.
- Obtain and Check Outputs of DUT Pins: For example, compare the results with a reference model using assertions.
- Complete Verification and Destroy DUT: Calling Finish() will write waveform, coverage, and other information to files.
The corresponding pseudocode is as follows:
from UT_DUT import *
# 1 Create
dut = DUT()
# 2 Initialize
dut.SetWaveform("test.fst")
dut.InitClock("clock")
# 3 Reset
dut.reset = 1
dut.Step(1)
dut.reset = 0
dut.Step(1)
# 4 Input Data
dut.input_pin1.value = 0x123123
dut.input_pin3.value = "0b1011"
# 5 Drive the Circuit
dut.Step(1)
# 6 Get Results
x = dut.output_pin.value
print("result:", x)
# 7 Destroy
dut.Finish()
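The StepRis and StepFal methods listed above can attach callbacks that run on each clock edge, which is convenient for monitoring signals or collecting statistics while the clock advances. Below is a minimal, hedged sketch that continues the pseudocode above; it assumes the callback receives the current cycle count as its first argument (check the generated class for the exact signature).
from UT_DUT import *

def monitor(cycle, dut):
    # Invoked on every rising edge; prints an output pin each cycle
    print("cycle", cycle, "output =", dut.output_pin.value)

dut = DUT()
dut.InitClock("clock")
dut.StepRis(monitor, args=(dut,))   # register the rising-edge callback
dut.input_pin1.value = 0x123123
dut.Step(5)                         # monitor() fires once per cycle
dut.Finish()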
Other Data Types
In general, most DUT verification tasks can be accomplished using the interfaces provided by the DUT class. However, for special cases, additional interfaces are needed, such as custom clocks, asynchronous operations, advancing combinational circuits and writing waveforms, and modifying pin properties. In the DUT class generated by Picker, in addition to the XData type pin member variables, there are also an XClock type xclock and an XPort type xport.
class DUTAdder(object):
xport: XPort # Member variable xport for managing all pins in the DUT
xclock: XClock # Member variable xclock for managing the clock
# DUT Pins
a: XData
b: XData
cin: XData
cout: XData
XData Class
- Data in DUT pins usually have an uncertain bit width and can be in one of four states: 0, 1, Z, and X. Picker provides XData to represent pin data in the circuit.
Main Methods
class XData:
# Split XData, for example, create a separate XData for bits 7-10 of a 32-bit XData
# name: Name, start: Start bit, width: Bit width, e.g., sub = a.SubDataRef("sub_pin", 0, 4)
def SubDataRef(name, start, width): XData
def GetWriteMode(): WriteMode # Get the write mode of XData: Imme (immediate), Rise (rising edge), Fall (falling edge)
def SetWriteMode(mode: WriteMode) # Set the write mode of XData, e.g., a.SetWriteMode(WriteMode::Imme)
def DataValid(): bool # Check if the data is valid (returns false if value contains X or Z states, otherwise true)
def W(): int # Get the bit width of XData (0 indicates XData is of Verilog's logic type, otherwise it's the width of Vec type)
def U(): int # Get the unsigned value of XData (e.g., x = a.value)
def S(): int # Get the signed value of XData
def String(): str # Convert XData to a hexadecimal string, e.g., "0x123ff", if ? appears, it means X or Z state in the corresponding 4 bits
def Equal(xdata): bool # Compare two XData instances for equality
def Set(value) # Assign value to XData, value can be XData, string, int, bytes, etc.
def GetBytes(): bytes # Get the value of XData in bytes format
def Connect(xdata): bool # Connect two XData instances; only In and Out types can be connected. When Out data changes, In type XData will be automatically updated.
def IsInIO(): bool # Check if XData is of In type, which can be read and written
def IsOutIO(): bool # Check if XData is of Out type, which is read-only
def IsBiIO(): bool # Check if XData is of Bi type, which can be read and written
def IsImmWrite(): bool # Check if XData is in Imm write mode
def IsRiseWrite(): bool # Check if XData is in Rise write mode
def IsFallWrite(): bool # Check if XData is in Fall write mode
def AsImmWrite() # Change XData's write mode to Imm
def AsRiseWrite() # Change XData's write mode to Rise
def AsFallWrite() # Change XData's write mode to Fall
def AsBiIO() # Change XData to Bi type
def AsInIO() # Change XData to In type
def AsOutIO() # Change XData to Out type
def FlipIOType() # Invert the IO type of XData, e.g., In to Out or Out to In
def Invert() # Invert the data in XData
def At(index): PinBind # Get the pin at index, e.g., x = a.At(12).Get() or a.At(12).Set(1)
def AsBinaryString() # Convert XData's data to a binary string, e.g., "1001011"
To simplify assignment operations, XData
has overloaded property assignment for Set(value)
and U()
methods, allowing assignments and retrievals with pin.value = x
and x = pin.value
.
# Access with .value
# a is of XData type
a.value = 12345 # Decimal assignment
a.value = 0b11011 # Binary assignment
a.value = 0o12345 # Octal assignment
a.value = 0x12345 # Hexadecimal assignment
a.value = -1 # Assign all bits to 1, a.value = x is equivalent to a.Set(x)
a[31] = 0 # Assign value to bit 31
a.value = "x" # Assign high impedance state
a.value = "z" # Assign unknown state
x = a.value # Retrieve value, equivalent to x = a.U()
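As noted in the driving flow above, combinational logic needs the immediate write mode together with RefreshComb instead of Step. A minimal sketch using the Adder example (assuming its ports are purely combinational; method names follow the XData listing above):
from UT_Adder import *

dut = DUTAdder()
# Switch inputs to immediate write mode so assignments take effect right away
dut.a.AsImmWrite()
dut.b.AsImmWrite()
dut.cin.AsImmWrite()
dut.a.value = 0x1234
dut.b.value = 0x4321
dut.cin.value = 0
dut.RefreshComb()        # evaluate combinational logic without advancing time
print("sum =", hex(dut.sum.value), "cout =", dut.cout.value)
dut.Finish()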
XPort Class
- Directly operating on XData is clear and intuitive when dealing with a few pins. However, managing multiple XData instances can be cumbersome. XPort is a wrapper around XData that allows centralized management of multiple XData instances. It also provides methods for convenient batch management.
Initialization and Adding Pins
port = XPort("p") # Create an XPort instance with prefix p
Main Methods
class XPort:
def XPort(prefix = "") # Create a port with prefix prefix, e.g., p = XPort("tile_link_")
def PortCount(): int # Get the number of pins in the port (i.e., number of bound XData instances)
def Add(pin_name, XData) # Add a pin, e.g., p.Add("reset", dut.reset)
def Del(pin_name) # Delete a pin
def Connect(xport2) # Connect two ports
def NewSubPort(subprefix): XPort # Create a sub-port containing all pins whose names start with subprefix
def Get(key, raw_key = False): XData # Get XData
def SetZero() # Set all XData in the port to 0
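A short sketch of batch pin management with XPort, reusing the Adder DUT from earlier (it assumes XPort is available from the generated module, as in the initialization snippet above):
from UT_Adder import *

dut = DUTAdder()
port = XPort("adder_")        # pins added below are managed under the "adder_" prefix
port.Add("a", dut.a)
port.Add("b", dut.b)
port.Add("cin", dut.cin)
print(port.PortCount())       # 3 bound XData instances
port.SetZero()                # drive all bound pins to 0 in one call
a_pin = port.Get("a")         # look up an XData by key (see the raw_key parameter for prefix handling)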
XClock Class
XClock is a wrapper for the circuit clock used to drive the circuit. In traditional simulation tools (e.g., Verilator), you need to manually assign values to clk and update the state using functions like step_eval. Our tool provides methods to bind the clock directly to XClock, allowing the Step() method to simultaneously update the clk and circuit state.
Initialization and Adding Pins
# Initialization
clk = XClock(stepfunc) # Parameter stepfunc is the circuit advancement method provided by DUT backend, e.g., Verilator's step_eval
Main Methods
class XClock:
def Add(xdata) # Bind Clock with xdata, e.g., clock.Add(dut.clk)
def Add(xport) # Bind Clock with all XData instances in an XPort, e.g., clock.Add(dut.xport)
def RefreshComb() # Advance circuit state without advancing time or dumping waveform
def RefreshCombT() # Advance circuit state (advance time and dump waveform)
def Step(int s = 1) # Advance the circuit by s clock cycles, DUT.Step = DUT.xclock.Step
def StepRis(func, args=(), kwargs={}) # Set rising edge callback function
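For completeness, a brief sketch of driving the circuit through xclock directly; it reuses the generic DUT and pin names from the pseudocode above, and assumes xclock was already created by the generated class:
from UT_DUT import *

dut = DUT()
xclk = dut.xclock             # XClock instance created by the generated DUT class
xclk.Add(dut.clock)           # bind the clock pin so Step toggles it (pin name follows the pseudocode above)
xclk.Step(5)                  # advance 5 cycles; equivalent to dut.Step(5)
xclk.RefreshComb()            # evaluate combinational logic only, without advancing time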
2 - Waveform Generation
Usage
When using the Picker tool to encapsulate the DUT, use the -w [wave_file]
option to specify the waveform file to be saved. Different waveform file types are supported for different backend simulators, as follows:
- Verilator
  - .vcd format waveform file.
  - .fst format waveform file, a more efficient compressed file.
- VCS
  - .fsdb format waveform file, a more efficient compressed file.
Note that if you choose to generate the libDPI_____.so
file yourself, the waveform file format is not restricted by the above constraints. The waveform file format is determined when the simulator constructs libDPI.so
, so if you generate it yourself, you need to specify the waveform file format using the corresponding simulator’s configuration.
Python Example
Normally, the DUT needs to be explicitly declared complete to notify the simulator to perform post-processing tasks (writing waveform, coverage files, etc.). In Python, after completing all tests, call the .finalize()
method of the DUT to notify the simulator that the task is complete, and then flush the files to disk.
Using the Adder Example, the test program is as follows:
from UT_Adder import *
if __name__ == "__main__":
dut = DUTAdder()
for i in range(10):
dut.a.value = i * 2
dut.b.value = int(i / 4)
dut.Step(1)
print(dut.sum.value, dut.cout.value)
dut.finalize() # flush the wave file to disk
After the run is completed, the waveform file with the specified name will be generated.
Viewing Results
GTKWave
Use GTKWave to open fst or vcd waveform files to view the waveform.
Verdi
Use Verdi to open fsdb or vcd waveform files to view the waveform.
3 - Multi-File Input
Multi-File Input and Output
In many cases, a module in one file may instantiate modules in other files. In such cases, you can use the picker tool's -f option to process multiple Verilog source files. For example, suppose you have three source files: Cache.sv, CacheStage.sv, and CacheMeta.sv:
File List
Cache.sv
// In Cache.sv
module Cache(
...
);
CacheStage s1(
...
);
CacheStage s2(
...
);
CacheStage s3(
...
);
CacheMeta cachemeta(
...
);
endmodule
CacheStage.sv
// In CacheStage.sv
module CacheStage(
...
);
...
endmodule
CacheMeta.sv
// In CacheMeta.sv
module CacheMeta(
...
);
...
endmodule
Usage
In this case, the module under test is Cache, which is in Cache.sv. You can generate the DUT using the following command:
Command Line Specification
picker export Cache.sv --fs CacheStage.sv,CacheMeta.sv --sname Cache
Specification through a File List File
You can also use a .txt file to specify multiple input files:
picker export Cache.sv --fs src.txt --sname Cache
Where the contents of src.txt are:
CacheStage.sv
CacheMeta.sv
Notes
- It is important to note that even when using multiple file inputs, you still need to specify the file containing the top-level module under test, as shown in the example above with Cache.sv.
- When using multiple file inputs, Picker will pass all files to the simulator, which will compile them simultaneously. Therefore, it is necessary to ensure that the module names in all files are unique.
4 - Coverage Statistics
The Picker tool supports generating code line coverage reports, and the MLVP (https://github.com/XS-MLVP/mlvp) project supports generating functional coverage reports.
Code Line Coverage
Currently, the Picker tool supports generating code line coverage reports based on the Verilator simulator.
Verilator
The Verilator simulator provides coverage support.
The implementation is as follows:
- Use the verilator_coverage tool to process or merge coverage databases, ultimately generating a coverage.info file for multiple DUTs.
- Use the genhtml command of the lcov tool, based on coverage.info and the RTL code source files, to generate a complete code coverage report.
The process is as follows:
- Enable the COVERAGE feature when generating the DUT with Picker (add the -c option).
- After the simulator runs, a coverage database file V{DUT_NAME}.dat will be generated after dut.finalize() is called.
- Use the write-info function of verilator_coverage to convert it to a .info file.
- Use the genhtml function of lcov to generate an HTML report using the .info file and the RTL source files specified in the file.
Note: The RTL source files specified in the .info file refer to the source file paths used when generating the DUT, and these paths need to be valid in the current environment. In simple terms, all .sv/.v files used for compilation need to exist in the current environment, and their directory structure must remain unchanged.
verilator_coverage
The verilator_coverage tool is used to process the .dat coverage data generated by running the DUT. It can process and merge multiple .dat files and has two main functions:
- Generate a .info file from the .dat files for subsequent generation of a web page report.
  - -write <merged-datafile> -read <datafiles>: Merge several .dat files (datafiles) into one .dat file.
  - -write-info <merged-info> -read <datafiles>: Merge several .dat files (datafiles) into one .info file.
- Combine the .dat file with the source code files, and write the coverage data in annotated form into a specified directory.
  - -annotate <output_dir>: Present the coverage situation in the source files in annotated form, and save the result to output_dir. The format is as follows:
      100000  input logic a;   // Begins with whitespace, because
                               // number of hits (100000) is above the limit.
     %000000  input logic b;   // Begins with %, because
                               // number of hits (0) is below the limit.
  - -annotate-min <count>: Specify the above limit as count.
genhtml
The genhtml command provided by the lcov package can export a more readable HTML report from the .info file. The command format is: genhtml [OPTIONS] <infofiles>.
It is recommended to use the -o <outputdir> option to output the results to a specified directory, for example in the Adder project.
Usage Example
If you enable the -c option when using Picker, a V{DUT_NAME}.dat file will be generated after the simulation ends. A Makefile will also be present in the top-level directory, containing the command to generate the coverage report.
The command is as follows:
coverage:
...
verilator_coverage -write-info coverage.info ./${TARGET}/V${PROJECT}_coverage.dat
genhtml coverage.info --output-directory coverage
...
Enter make coverage in the shell. This will generate coverage.info based on the generated .dat file and then use genhtml to generate an HTML report in the coverage directory.
VCS
Documentation for VCS is currently being finalized.
5 - Integrated Testing Framework
In traditional chip verification practices, frameworks like UVM are widely adopted. Although they provide a comprehensive set of verification methodologies, they are typically confined to specific hardware description languages and simulation environments. Our tool breaks these limitations by converting simulation code into C++ or Python, allowing us to leverage software verification tools for more comprehensive testing.
Given Python's robust ecosystem, this project primarily uses Python as an example, briefly introducing two classic software testing frameworks: Pytest and Hypothesis. Pytest handles various testing needs with its simple syntax and rich features. Meanwhile, Hypothesis enhances the thoroughness and depth of testing by generating test cases that uncover unexpected edge cases.
Our project is designed from the outset to be compatible with various modern software testing frameworks. We encourage you to explore the potential of these tools and apply them to your testing processes. Through hands-on practice, you will gain a deeper understanding of how these tools can enhance code quality and reliability. Let's work together to improve the quality of chip development.
5.1 - PyTest
Software Testing
Before we start with pytest, let’s understand software testing. Software testing generally involves the following four aspects:
- Unit Testing: Also known as module testing, it involves checking the correctness of program modules, which are the smallest units in software design.
- Integration Testing: Also known as assembly testing, it usually builds on unit testing by sequentially and incrementally testing all program modules, focusing on the interface parts of different modules.
- System Testing: It treats the entire software system as a whole for testing, including testing the functionality, performance, and the software’s running environment.
- Acceptance Testing: Refers to testing the entire system according to the project task book, contract, and acceptance criteria agreed upon by both the supply and demand sides, to determine whether to accept or reject the system.
pytest was initially designed as a unit testing framework, but it also provides many features that allow it to be used for a wider range of testing, including integration testing and system testing. It simplifies test writing and execution by collecting test functions and modules and providing a rich assertion library. It is a very mature, full-featured, and powerful Python testing framework with the following key features:
- Simple and Flexible: Pytest is easy to get started with and is flexible.
- Supports Parameterization: You can easily provide different parameters for test cases.
- Full-featured: Pytest not only supports simple unit testing but can also handle complex functional testing. You can even use it for automation testing, such as Selenium or Appium testing, as well as interface automation testing (combining Pytest with the Requests library).
- Rich Plugin Ecosystem: Pytest has many third-party plugins, and you can also customize extensions. Some commonly used plugins include:
  - pytest-selenium: Integrates Selenium.
  - pytest-html: Generates HTML test reports.
  - pytest-rerunfailures: Repeats test cases in case of failure.
  - pytest-xdist: Supports multi-CPU distribution.
- Well Integrated with Jenkins.
- Supports Allure Report Framework.
This article will briefly introduce the usage of pytest based on testing requirements. The complete manual is available for in-depth study.
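Among the features listed above, parameterization is worth a quick illustration. A minimal sketch using @pytest.mark.parametrize (the values are arbitrary examples):
import pytest

# Each tuple provides one set of arguments; pytest runs the test once per set
@pytest.mark.parametrize("a, b, expected", [
    (1, 2, 3),
    (0, 0, 0),
    (10, 5, 15),
])
def test_add(a, b, expected):
    assert a + b == expected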
Installing Pytest
# Install pytest:
pip install pytest
# Upgrade pytest
pip install -U pytest
# Check pytest version
pytest --version
# Check installed package list
pip list
# Check pytest help documentation
pytest -h
# Install third-party plugins
pip install pytest-sugar
pip install pytest-rerunfailures
pip install pytest-xdist
pip install pytest-assume
pip install pytest-html
Using Pytest
Naming Convention
# When using pytest, our module names are usually prefixed with test or end with test. You can also modify the configuration file to customize the naming convention.
# test_*.py or *_test.py
test_demo1
demo2_test
# The class name in the module must start with Test and cannot have an init method.
class TestDemo1:
class TestLogin:
# The test methods defined in the class must start with test_
test_demo1(self)
test_demo2(self)
# Test Case
class TestOne:
def test_demo1(self):
print("Test Case 1")
def test_demo2(self):
print("Test Case 2")
Pytest Parameters
pytest supports many parameters, which can be viewed using the help command.
pytest -h
Here are some commonly used ones:
-m: Specify multiple tag names with an expression. pytest provides a decorator @pytest.mark.xxx for marking tests and grouping them (xxx is the group name you defined), so you can quickly select and run them, with different groups separated by and or or.
-v: Outputs more detailed information during runtime. Without -v, the runtime does not display the specific test case names being run; with -v, it prints out the specific test cases in the console.
-q: Similar to the verbosity in unittest, used to simplify the runtime output. When running tests with -q, only simple runtime information is displayed, for example:
.s.. [100%]
3 passed, 1 skipped in 9.60s
-k: You can run specified test cases using an expression. It is a fuzzy match, with and or or separating keywords, and the matching range includes file names, class names, and function names.
-x: Exit the test if one test case fails. This is very useful for debugging. When a test fails, stop running the subsequent tests.
-s: Display print content. When running test scripts, we often add some print content for debugging or printing some content. However, when running pytest, this content is not displayed. If you add -s, it will be displayed.
pytest test_se.py -s
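As mentioned for the -m option, tests can be grouped with the @pytest.mark.xxx decorator and then selected by group. A small illustrative sketch (the mark names smoke and slow are arbitrary; registering them in pytest.ini avoids unknown-mark warnings):
import pytest

@pytest.mark.smoke
def test_quick_check():
    assert 1 + 1 == 2

@pytest.mark.slow
def test_long_running():
    assert sum(range(100)) == 4950

# Select groups from the command line:
#   pytest -m smoke            # run only tests marked smoke
#   pytest -m "not slow"       # run everything except the slow group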
Selecting Test Cases to Execute with Pytest
In Pytest, you can select and execute test cases based on different dimensions such as test folders, test files, test classes, and test methods.
- Execute by test folder
# Execute all test cases in the current folder and subfolders
pytest .
# Execute all test cases in the tests folder and its subfolders, which are at the same level as the current folder
pytest ../tests
# Execute by test file
# Run all test cases in test_se.py
pytest test_se.py
# Execute by test class, must be in the following format:
# pytest file_name.py::TestClass, where "::" is the separator between the test module and the test class.
# Run all test cases under the class named TestSE in the test_se.py file
pytest test_se.py::TestSE
# Execute by test method, must be in the following format:
# pytest file_name.py::TestClass::TestMethod, where "::" is the separator between the test module, test class, and test method.
# Run the test case named test_get_new_message under the class named TestSE in the test_se.py file
pytest test_se.py::TestSE::test_get_new_message
# The above methods of selecting test cases all work on the command line. If you want to select tests directly from a test program, you can call pytest.main(), the format is:
pytest.main(["module.py::class::method"])
In addition, Pytest also supports multiple ways to control the execution of test cases, such as filtering execution, running in multiple processes, retrying execution, etc.
Writing Validation with Pytest
- During testing, we use the previously validated adder. Go to the Adder folder, create a new test_adder.py file in the picker_out_adder directory, with the following content:
# Import test modules and required libraries
from UT_Adder import *
import pytest
import ctypes
import random
# Use pytest fixture to initialize and clean up resources
@pytest.fixture
def adder():
# Create an instance of DUTAdder, load the dynamic link library
dut = DUTAdder()
# Execute one clock step to prepare the DUT
dut.Step(1)
# The code after the yield statement will be executed after the test ends, used to clean up resources
yield dut
# Clean up DUT resources and generate test coverage reports and waveforms
dut.Finish()
class TestFullAdder:
# Define full_adder as a static method, as it does not depend on class instances
@staticmethod
def full_adder(a, b, cin):
cin = cin & 0b1
Sum = ctypes.c_uint64(a).value
Sum += ctypes.c_uint64(b).value + cin
Cout = (Sum >> 64) & 0b1
Sum &= 0xffffffffffffffff
return Sum, Cout
# Use the pytest.mark.usefixtures decorator to specify the fixture to use
@pytest.mark.usefixtures("adder")
# Define the test method, where adder is injected by pytest through the fixture
def test_adder(self, adder):
# Perform multiple random tests
for _ in range(114514):
# Generate random 64-bit a, b, and 1-bit cin
a = random.getrandbits(64)
b = random.getrandbits(64)
cin = random.getrandbits(1)
# Set the input of the DUT
adder.a.value = a
adder.b.value = b
adder.cin.value = cin
# Execute one clock step
adder.Step(1)
# Calculate the expected result using a static method
sum, cout = self.full_adder(a, b, cin)
# Assert that the output of the DUT is the same as the expected result
assert sum == adder.sum.value
assert cout == adder.cout.value
if __name__ == "__main__":
pytest.main(['-v', 'test_adder.py::TestFullAdder'])
- After running the test, the output is as follows:
collected 1 item
test_adder.py ✓ 100% ██████████
Results (4.33s):
The successful test indicates that after 114514 loops, our device has not found any bugs for now. However, using randomly generated test cases with multiple loops consumes a considerable amount of resources, and these randomly generated test cases may not effectively cover all boundary conditions. In the next section, we will introduce a more efficient method for generating test cases.
5.2 - Hypothesis
Hypothesis
In the previous section, we manually wrote test cases and specified inputs and expected outputs for each case. This method has some issues, such as incomplete test case coverage and the tendency to overlook boundary conditions.
Hypothesis is a Python library for property-based testing. Its main goal is to make testing simpler, faster, and more reliable. It uses a method called property-based testing, where you can write some hypotheses for your code, and Hypothesis will automatically generate test cases to verify these hypotheses. This makes it easier to write comprehensive and efficient tests.
Hypothesis can automatically generate various types of input data, including basic types (e.g., integers, floats, strings), container types (e.g., lists, sets, dictionaries), and custom types. It tests based on the properties (assertions) you provide. If a test fails, it will try to narrow down the input data to find the smallest failing case.
With Hypothesis, you can better cover the boundary conditions of your code and uncover errors you might not have considered. This helps improve the quality and reliability of your code.
Basic Concepts
- Test Function: The function or method to be tested.
- Properties: Conditions that the test function should satisfy. Properties are applied to the test function as decorators.
- Strategy: A generator for test data. Hypothesis provides a range of built-in strategies, such as integers, strings, lists, etc. You can also define custom strategies.
- Test Generator: A function that generates test data based on strategies. Hypothesis automatically generates test data and passes it as parameters to the test function.
This article will briefly introduce Hypothesis based on testing requirements. The complete manual is available for in-depth study.
Installation
Install with pip and import in Python to use:
pip install hypothesis
import hypothesis
Basic Usage
Properties and Strategies
Hypothesis uses property decorators to define the properties of test functions. The most common decorator is @given, which specifies the properties the test function should satisfy. We can define a test function test_addition using the @given decorator and add properties to x. The test generator will automatically generate test data for the function and pass it as parameters, for example:
from hypothesis import given
from hypothesis.strategies import integers

def addition(number: int) -> int:
    return number + 1

@given(x=integers())
def test_addition(x):
    assert addition(x) == x + 1
In this example, integers() is a built-in strategy for generating integer test data. Hypothesis offers a variety of built-in strategies for generating different types of test data. Besides integers(), there are strategies for strings, booleans, lists, dictionaries, etc. For instance, using the text() strategy to generate string test data and using lists(text()) to generate lists of strings:
@given(s=text(), l=lists(text()))
def test_string_concatenation(s, l):
result = s + "".join(l)
assert len(result) == len(s) + sum(len(x) for x in l)
You can also define custom strategies to generate specific types of test data, for example, a strategy for non-negative integers:
def non_negative_integers():
return integers(min_value=0)
@given(x=non_negative_integers())
def test_positive_addition(x):
assert x + 1 > x
Expectations
We can compute the expected result in the test and compare it with the actual result returned by the function:
@given(x=integers())
def test_addition(x):
    expected = x + 1
    actual = addition(x)
    assert actual == expected
Hypotheses and Assertions
When using Hypothesis for testing, we can use standard Python assertions to verify the properties of the test function. Hypothesis will automatically generate test data and run the test function based on the properties defined in the decorator. If an assertion fails, Hypothesis will try to narrow down the test data to find the smallest failing case.
Suppose we have a string reversal function. We can use an assert statement to check if reversing a string twice equals itself:
@given(s=text())
def test_reverse_string(s):
    assert s[::-1][::-1] == s
Writing Tests
- Tests in Hypothesis consist of two parts: a function that looks like a regular test in your chosen framework but with some extra parameters, and a @given decorator specifying how to provide those parameters. Here's an example of how to use it to verify a full adder, which we tested previously.
- Based on the previous section's code, we modify the method of generating test cases from random numbers to the integers() strategy. The modified code is as follows:
from UT_Adder import *
import pytest
import ctypes
from hypothesis import given, strategies as st
# Using pytest fixture to initialize and clean up resources
@pytest.fixture(scope="class")
def adder():
# Create DUTAdder instance and load dynamic library
dut = DUTAdder()
# Perform a clock step to prepare the DUT
dut.Step(1)
# Code after yield executes after tests finish, for cleanup
yield dut
# Clean up DUT resources and generate coverage report and waveform
dut.finalize()
class TestFullAdder:
# Define full_adder as a static method, as it doesn't depend on class instance
@staticmethod
def full_adder(a, b, cin):
cin = cin & 0b1
Sum = ctypes.c_uint64(a).value
Sum += ctypes.c_uint64(b).value + cin
Cout = (Sum >> 64) & 0b1
Sum &= 0xffffffffffffffff
return Sum, Cout
# Use Hypothesis to automatically generate test cases
@given(
a=st.integers(min_value=0, max_value=0xffffffffffffffff),
b=st.integers(min_value=0, max_value=0xffffffffffffffff),
cin=st.integers(min_value=0, max_value=1)
)
# Define test method, adder parameter injected by pytest via fixture
def test_full_adder_with_hypothesis(self, adder, a, b, cin):
# Calculate expected sum and carry
sum_expected, cout_expected = self.full_adder(a, b, cin)
# Set DUT inputs
adder.a.value = a
adder.b.value = b
adder.cin.value = cin
# Perform a clock step
adder.Step(1)
# Assert DUT outputs match expected results
assert sum_expected == adder.sum.value
assert cout_expected == adder.cout.value
if __name__ == "__main__":
# Run specified tests in verbose mode
pytest.main(['-v', 'test_adder.py::TestFullAdder'])
In this example, the @given decorator and strategies are used to generate random data that meets specified conditions. st.integers() is a strategy for generating integers within a specified range, used to generate numbers between 0 and 0xffffffffffffffff for a and b, and between 0 and 1 for cin. Hypothesis will automatically rerun this test multiple times, each time using different random inputs, helping reveal potential boundary conditions or edge cases.
- Run the tests, and the output will be as follows:
collected 1 item
test_adder.py ✓ 100% ██████████
Results (0.42s):
1 passed
As we can see, the tests were completed in a short amount of time.
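If deeper exploration is needed, the number of generated cases can be tuned with Hypothesis's settings decorator. A small sketch (max_examples=200 is an arbitrary illustrative value; the default is 100):
from hypothesis import given, settings, strategies as st

@settings(max_examples=200)   # generate more cases than the default 100
@given(a=st.integers(min_value=0, max_value=0xffffffffffffffff),
       b=st.integers(min_value=0, max_value=0xffffffffffffffff))
def test_addition_is_commutative(a, b):
    # a simple property used purely for illustration
    assert a + b == b + a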