In traditional chip verification practices, frameworks like UVM are widely adopted. Although they provide a comprehensive set of verification methodologies, they are typically confined to specific hardware description languages and simulation environments. Our tool breaks these limitations by converting simulation code into C++ or Python, allowing us to leverage software verification tools for more comprehensive testing. Given Python’s robust ecosystem, this project primarily uses Python as an example, briefly introducing two classic software testing frameworks: Pytest and Hypothesis. Pytest handles various testing needs with its simple syntax and rich features. Meanwhile, Hypothesis enhances the thoroughness and depth of testing by generating test cases that uncover unexpected edge cases. Our project is designed from the outset to be compatible with various modern software testing frameworks. We encourage you to explore the potential of these tools and apply them to your testing processes. Through hands-on practice, you will gain a deeper understanding of how these tools can enhance code quality and reliability. Let’s work together to improve the quality of chip development.
Integrated Testing Framework
- 1: PyTest
- 2: Hypothesis
1 - PyTest
Software Testing
Before we start with pytest, let’s understand software testing. Software testing generally involves the following four aspects:
- Unit Testing: Also known as module testing, it involves checking the correctness of program modules, which are the smallest units in software design.
- Integration Testing: Also known as assembly testing, it usually builds on unit testing by sequentially and incrementally testing all program modules, focusing on the interface parts of different modules.
- System Testing: It treats the entire software system as a whole for testing, including testing the functionality, performance, and the software’s running environment.
- Acceptance Testing: Refers to testing the entire system according to the project task book, contract, and acceptance criteria agreed upon by both the supply and demand sides, to determine whether to accept or reject the system.
pytest was initially designed as a unit testing framework, but it also provides many features that allow it to be used for a wider range of testing, including integration testing and system testing. It is a very mature, full-featured Python testing framework that simplifies test writing and execution by collecting test functions and modules and providing a rich assertion library. Its key features include:
- Simple and Flexible: Pytest is easy to get started with and is flexible.
- Supports Parameterization: You can easily provide different parameters for test cases.
- Full-featured: Pytest not only supports simple unit testing but can also handle complex functional testing. You can even use it for automation testing, such as Selenium or Appium testing, as well as interface automation testing (combining Pytest with the Requests library).
- Rich Plugin Ecosystem: Pytest has many third-party plugins, and you can also customize extensions. Some commonly used plugins include:
  - pytest-selenium: Integrates Selenium.
  - pytest-html: Generates HTML test reports.
  - pytest-rerunfailures: Repeats test cases in case of failure.
  - pytest-xdist: Supports multi-CPU distribution.
- Well Integrated with Jenkins.
- Supports Allure Report Framework.
This article will briefly introduce the usage of pytest based on testing requirements. The complete manual is available here for students to study in depth.
Installing Pytest
# Install pytest:
pip install pytest
# Upgrade pytest
pip install -U pytest
# Check pytest version
pytest --version
# Check installed package list
pip list
# Check pytest help documentation
pytest -h
# Install third-party plugins
pip install pytest-sugar
pip install pytest-rerunfailures
pip install pytest-xdist
pip install pytest-assume
pip install pytest-html
Using Pytest
Naming Convention
# When using pytest, module names usually start with test_ or end with _test.
# You can customize the naming convention in the configuration file.
# test_*.py or *_test.py
test_demo1.py
demo2_test.py

# Class names in the module must start with Test and must not have an __init__ method.
class TestDemo1:
class TestLogin:

# Test methods defined in the class must start with test_
def test_demo1(self)
def test_demo2(self)

# Test case example
class TestOne:
    def test_demo1(self):
        print("Test Case 1")

    def test_demo2(self):
        print("Test Case 2")
Pytest Parameters
pytest supports many parameters, which can be viewed using the help command.
pytest --help
Here are some commonly used ones:
-m: Run tests selected by a marker expression. pytest provides the decorator @pytest.mark.xxx for marking tests and grouping them (xxx is a group name you define) so you can quickly select and run them; groups can be combined with and or or.
-v: Outputs more detailed information during runtime. Without -v, the runtime does not display the specific test case names being run; with -v, it prints out the specific test cases in the console.
-q: Similar to the verbosity in unittest, used to simplify the runtime output. When running tests with -q, only simple runtime information is displayed, for example:
.s.. [100%]
3 passed, 1 skipped in 9.60s
-k: Run test cases selected by a keyword expression. Matching is fuzzy, keywords can be combined with and or or, and the matching scope includes file names, class names, and function names.
-x: Exit the test if one test case fails. This is very useful for debugging. When a test fails, stop running the subsequent tests.
-s: Display print content. When running test scripts, we often add some print content for debugging or printing some content. However, when running pytest, this content is not displayed. If you add -s, it will be displayed.
pytest test_se.py -s
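As a sketch of how -m grouping works in practice (hypothetical test names; assumes pytest is installed, and in a real project the markers would be declared in pytest.ini to avoid "unknown marker" warnings):

```python
import pytest

@pytest.mark.smoke
def test_login():
    # fast check that runs in the "smoke" group
    assert 1 + 1 == 2

@pytest.mark.slow
def test_full_sync():
    # expensive check that runs in the "slow" group
    assert "sync".upper() == "SYNC"

# Run only the smoke group:  pytest -m smoke
# Run either group:          pytest -m "smoke or slow"
```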
Selecting Test Cases to Execute with Pytest
In Pytest, you can select and execute test cases based on different dimensions such as test folders, test files, test classes, and test methods.
- Execute by test folder
# Execute all test cases in the current folder and subfolders
pytest .
# Execute all test cases in the tests folder (a sibling of the current folder) and its subfolders
pytest ../tests
- Execute by test file
# Run all test cases in test_se.py
pytest test_se.py
- Execute by test class, in the format pytest file_name.py::TestClass, where "::" is the separator between the test module and test class
# Run all test cases under the class TestSE in test_se.py
pytest test_se.py::TestSE
- Execute by test method, in the format pytest file_name.py::TestClass::test_method, where "::" separates the test module, test class, and test method
# Run the test case test_get_new_message under the class TestSE in test_se.py
pytest test_se.py::TestSE::test_get_new_message
- The selection methods above are all for the **command line**. To execute tests directly from a test program, call pytest.main() with the same selector syntax, for example:
pytest.main(["module.py::TestClass::test_method"])
In addition, Pytest also supports multiple ways to control the execution of test cases, such as filtering execution, running in multiple processes, retrying execution, etc.
Writing Validation with Pytest
- During testing, we use the previously validated adder. Go to the Adder folder, create a new test_adder.py file in the picker_out_adder directory, with the following content:
# Import test modules and required libraries
from UT_Adder import *

import pytest
import ctypes
import random

# Use a pytest fixture to initialize and clean up resources
@pytest.fixture
def adder():
    # Create an instance of DUTAdder, loading the dynamic link library
    dut = DUTAdder()
    # Execute one clock step to prepare the DUT
    dut.Step(1)
    # Code after the yield statement runs after the test ends, for cleanup
    yield dut
    # Clean up DUT resources and generate the coverage report and waveform
    dut.Finish()

class TestFullAdder:
    # Define full_adder as a static method, as it does not depend on class instances
    @staticmethod
    def full_adder(a, b, cin):
        cin = cin & 0b1
        Sum = ctypes.c_uint64(a).value
        Sum += ctypes.c_uint64(b).value + cin
        Cout = (Sum >> 64) & 0b1
        Sum &= 0xffffffffffffffff
        return Sum, Cout

    # Use the pytest.mark.usefixtures decorator to specify the fixture to use
    @pytest.mark.usefixtures("adder")
    # Define the test method; adder is injected by pytest through the fixture
    def test_adder(self, adder):
        # Perform multiple random tests
        for _ in range(114514):
            # Generate random 64-bit a and b, and a 1-bit cin
            a = random.getrandbits(64)
            b = random.getrandbits(64)
            cin = random.getrandbits(1)
            # Set the DUT inputs
            adder.a.value = a
            adder.b.value = b
            adder.cin.value = cin
            # Execute one clock step
            adder.Step(1)
            # Compute the expected result with the reference model
            sum, cout = self.full_adder(a, b, cin)
            # Assert the DUT outputs match the expected result
            assert sum == adder.sum.value
            assert cout == adder.cout.value

if __name__ == "__main__":
    pytest.main(['-v', 'test_adder.py::TestFullAdder'])
- After running the test, the output is as follows:
collected 1 item
test_adder.py ✓ 100% ██████████
Results (4.33s):
1 passed
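The reference model itself can also be sanity-checked standalone, independent of the DUT. This is a sketch (not part of the generated test bench) that cross-checks a masking-based full adder against Python's arbitrary-precision addition:

```python
import random

MASK64 = (1 << 64) - 1

def full_adder(a, b, cin):
    # Same reference model as above, using masking instead of ctypes
    total = (a & MASK64) + (b & MASK64) + (cin & 1)
    return total & MASK64, (total >> 64) & 1

# Cross-check against Python's unbounded integer addition
for _ in range(1000):
    a, b = random.getrandbits(64), random.getrandbits(64)
    cin = random.getrandbits(1)
    s, cout = full_adder(a, b, cin)
    assert (cout << 64) | s == a + b + cin
```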
The successful test indicates that after 114514 loops, our device has not found any bugs for now. However, using randomly generated test cases with multiple loops consumes a considerable amount of resources, and these randomly generated test cases may not effectively cover all boundary conditions. In the next section, we will introduce a more efficient method for generating test cases.
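To illustrate why pure random stimulus struggles with boundary values: the probability of drawing the all-ones operand 2**64 - 1 uniformly at random is about 2**-64, so even many iterations essentially never exercise that corner. A quick sketch:

```python
import random

TRIALS = 100_000
boundary = (1 << 64) - 1

# Count how often uniform 64-bit random values hit the all-ones boundary
hits = sum(1 for _ in range(TRIALS) if random.getrandbits(64) == boundary)
print(hits)  # almost certainly 0: the chance per draw is 2**-64
```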
2 - Hypothesis
Hypothesis
In the previous section, we manually wrote test cases and specified inputs and expected outputs for each case. This method has some issues, such as incomplete test case coverage and the tendency to overlook boundary conditions.

Hypothesis is a Python library for property-based testing. Its main goal is to make testing simpler, faster, and more reliable. With property-based testing, you write properties (hypotheses) that your code should satisfy, and Hypothesis automatically generates test cases to verify them. This makes it easier to write comprehensive and efficient tests.

Hypothesis can automatically generate various types of input data, including basic types (e.g., integers, floats, strings), container types (e.g., lists, sets, dictionaries), and custom types. It tests against the properties (assertions) you provide, and if a test fails, it tries to shrink the input data to find the smallest failing case. With Hypothesis, you can better cover the boundary conditions of your code and uncover errors you might not have considered, which helps improve the quality and reliability of your code.
Basic Concepts
- Test Function: The function or method to be tested.
- Properties: Conditions that the test function should satisfy. Properties are applied to the test function as decorators.
- Strategy: A generator for test data. Hypothesis provides a range of built-in strategies, such as integers, strings, lists, etc. You can also define custom strategies.
- Test Generator: A function that generates test data based on strategies. Hypothesis automatically generates test data and passes it as parameters to the test function.
This article will briefly introduce Hypothesis based on testing requirements. The complete manual is available for in-depth study.
Installation
Install with pip and import in Python to use:
pip install hypothesis
import hypothesis
Basic Usage
Properties and Strategies
Hypothesis uses decorators to attach properties to test functions. The most common decorator is @given, which specifies the strategy for each parameter of the test function. For example, we can define a test function test_addition with the @given decorator and attach an integer strategy to x; the test generator will automatically produce test data and pass it in as the parameter:
from hypothesis import given
from hypothesis.strategies import integers

def addition(number: int) -> int:
    return number + 1

@given(x=integers())
def test_addition(x):
    assert addition(x) == x + 1
In this example, integers() is a built-in strategy for generating integer test data. Hypothesis offers a variety of built-in strategies for generating different types of test data. Besides integers(), there are strategies for strings, booleans, lists, dictionaries, etc. For instance, using the text() strategy to generate string test data and using lists(text()) to generate lists of strings:
@given(s=text(), l=lists(text()))
def test_string_concatenation(s, l):
    result = s + "".join(l)
    assert len(result) == len(s) + sum(len(x) for x in l)
You can also define custom strategies to generate specific types of test data, for example, a strategy for non-negative integers:
def non_negative_integers():
    return integers(min_value=0)

@given(x=non_negative_integers())
def test_positive_addition(x):
    assert x + 1 > x
Expectations
We can compute the expected result inside the test and assert that the actual output matches it:
@given(x=integers())
def test_addition(x):
    expected = x + 1
    actual = addition(x)
    assert actual == expected
Hypotheses and Assertions
When using Hypothesis for testing, we can use standard Python assertions to verify the properties of the test function. Hypothesis will automatically generate test data and run the test function based on the properties defined in the decorator. If an assertion fails, Hypothesis will try to narrow down the test data to find the smallest failing case.
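Hypothesis's actual shrinker is considerably more sophisticated, but the core idea can be sketched in plain Python. This is a hypothetical greedy shrinker, not Hypothesis's API: given a failing integer input, keep trying smaller candidates that still fail until no smaller one does:

```python
def shrink_int(fails, start):
    """Greedily shrink a failing integer toward 0 while it still fails."""
    current = start
    changed = True
    while changed:
        changed = False
        # Try aggressive jumps first (0, halving), then a small step
        for candidate in (0, current // 2, current - 1):
            if candidate < current and fails(candidate):
                current = candidate
                changed = True
                break
    return current

# Property under test: "x is always < 1000" (it fails for any x >= 1000)
print(shrink_int(lambda x: x >= 1000, 87_654))  # 1000, the minimal failing case
```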
Suppose we have a string reversal function. We can use an assert statement to check if reversing a string twice equals itself:
@given(s=text())
def test_reverse_string(s):
    assert s[::-1][::-1] == s
Writing Tests
- Tests in Hypothesis consist of two parts: a function that looks like a regular test in your chosen framework but with some extra parameters, and a @given decorator specifying how to provide those parameters. Here's an example of using it to verify the full adder we tested previously.
- Building on the previous section's code, we change test-case generation from random numbers to the integers() strategy. The modified code is as follows:
from UT_Adder import *

import pytest
import ctypes
from hypothesis import given, strategies as st
# Using a pytest fixture to initialize and clean up resources
@pytest.fixture(scope="class")
def adder():
    # Create a DUTAdder instance and load the dynamic library
    dut = DUTAdder()
    # Perform a clock step to prepare the DUT
    dut.Step(1)
    # Code after yield executes after the tests finish, for cleanup
    yield dut
    # Clean up DUT resources and generate the coverage report and waveform
    dut.Finish()

class TestFullAdder:
    # Define full_adder as a static method, as it doesn't depend on a class instance
    @staticmethod
    def full_adder(a, b, cin):
        cin = cin & 0b1
        Sum = ctypes.c_uint64(a).value
        Sum += ctypes.c_uint64(b).value + cin
        Cout = (Sum >> 64) & 0b1
        Sum &= 0xffffffffffffffff
        return Sum, Cout

    # Use Hypothesis to automatically generate test cases
    @given(
        a=st.integers(min_value=0, max_value=0xffffffffffffffff),
        b=st.integers(min_value=0, max_value=0xffffffffffffffff),
        cin=st.integers(min_value=0, max_value=1)
    )
    # Define the test method; the adder parameter is injected by pytest via the fixture
    def test_full_adder_with_hypothesis(self, adder, a, b, cin):
        # Calculate the expected sum and carry
        sum_expected, cout_expected = self.full_adder(a, b, cin)
        # Set DUT inputs
        adder.a.value = a
        adder.b.value = b
        adder.cin.value = cin
        # Perform a clock step
        adder.Step(1)
        # Assert DUT outputs match the expected results
        assert sum_expected == adder.sum.value
        assert cout_expected == adder.cout.value

if __name__ == "__main__":
    # Run the specified tests in verbose mode
    pytest.main(['-v', 'test_adder.py::TestFullAdder'])
In this example, the @given decorator and strategies are used to generate random data that meets specified conditions. st.integers() is a strategy for generating integers within a specified range, used to generate numbers between 0 and 0xffffffffffffffff for a and b, and between 0 and 1 for cin. Hypothesis will automatically rerun this test multiple times, each time using different random inputs, helping reveal potential boundary conditions or edge cases.
- Run the tests, and the output will be as follows:
collected 1 item
test_adder.py ✓ 100% ██████████
Results (0.42s):
1 passed
As we can see, the tests were completed in a short amount of time.