In traditional chip verification, frameworks like UVM are widely adopted. Although they provide a comprehensive set of verification methodologies, they are typically confined to specific hardware description languages and simulation environments. Our tool overcomes these limitations by converting simulation code into C++ or Python, allowing us to leverage software verification tools for more comprehensive testing. Given Python's rich ecosystem, this project primarily uses Python as the example language and briefly introduces two classic software testing frameworks: Pytest and Hypothesis. Pytest handles a wide range of testing needs with its simple syntax and rich feature set, while Hypothesis improves the thoroughness and depth of testing by generating test cases that uncover unexpected edge cases. Our project is designed from the outset to be compatible with modern software testing frameworks, and we encourage you to explore these tools and apply them to your own testing processes. Through hands-on practice, you will gain a deeper understanding of how they can enhance code quality and reliability. Let's work together to improve the quality of chip development.
Integrated Testing Framework
- 1: PyTest
- 2: Hypothesis
1 - PyTest
Software Testing
Before we start with pytest, let’s understand software testing. Software testing generally involves the following four aspects:
- Unit Testing: Also known as module testing, it involves checking the correctness of program modules, which are the smallest units in software design.
- Integration Testing: Also known as assembly testing, it usually builds on unit testing by sequentially and incrementally testing all program modules, focusing on the interface parts of different modules.
- System Testing: It treats the entire software system as a whole for testing, including testing the functionality, performance, and the software’s running environment.
- Acceptance Testing: Refers to testing the entire system according to the project task book, contract, and acceptance criteria agreed upon by both the supply and demand sides, to determine whether to accept or reject the system.
pytest was initially designed as a unit testing framework, but it also provides many features that allow it to be used for a wider range of testing, including integration testing and system testing. It is a very mature, full-featured Python testing framework: it simplifies test writing and execution by automatically collecting test functions and modules and by providing a rich assertion mechanism. Its key features include:
- Simple and Flexible: Pytest is easy to get started with and flexible to use.
- Supports Parameterization: You can easily provide different parameters for test cases.
- Full-featured: Pytest not only supports simple unit testing but can also handle complex functional testing. You can even use it for automation testing, such as Selenium or Appium testing, as well as interface automation testing (combining Pytest with the Requests library).
- Rich Plugin Ecosystem: Pytest has many third-party plugins, and you can also write custom extensions. Some commonly used plugins include:
  - pytest-selenium: integrates Selenium.
  - pytest-html: generates HTML test reports.
  - pytest-rerunfailures: re-runs failed test cases.
  - pytest-xdist: distributes tests across multiple CPUs.
- Well Integrated with Jenkins.
- Supports the Allure Report Framework.
This article briefly introduces the usage of pytest based on our testing requirements. The complete manual is available here for in-depth study.
Installing Pytest
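pytest can be installed from PyPI; a minimal sketch:

```bash
pip install pytest
pytest --version   # verify the installation
```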
Using Pytest
Naming Convention
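By default, pytest collects test files named test_*.py or *_test.py, test classes whose names begin with Test (and that have no __init__ method), and test functions or methods whose names begin with test_. A minimal file following these conventions might look like this (the file and function names are illustrative):

```python
# test_sample.py -- collected automatically because of its name
def inc(x):
    return x + 1

def test_inc():
    # pytest only needs plain assert statements
    assert inc(3) == 4

class TestInc:
    def test_inc_negative(self):
        assert inc(-2) == -1
```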
Pytest Parameters
pytest supports many command-line parameters, which can be viewed with the help command (pytest --help).
Here are some commonly used ones:
-m: Select test cases by mark expression. pytest provides the decorator @pytest.mark.xxx for marking and grouping tests (xxx is a group name you define), so you can quickly select and run them; multiple groups can be combined with and or or (see the sketch after this list).
-v: Output more detailed information at runtime. Without -v, the run does not display the names of the specific test cases being executed; with -v, they are printed to the console.
-q: Similar to the verbosity option in unittest, used to simplify the runtime output; with -q, only brief run information is displayed.
-k: Run specified test cases using an expression. The match is fuzzy, keywords can be combined with and or or, and the matching range covers file names, class names, and function names.
-x: Exit as soon as one test case fails. This is very useful for debugging: when a test fails, the remaining tests are not run.
-s: Display print output. Test scripts often contain print statements for debugging, but pytest suppresses this output by default; adding -s makes it visible.
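As an illustration of -m and -k, here is a hypothetical test file (the mark names slow and smoke are arbitrary group names; custom marks are best registered in pytest.ini to avoid warnings):

```python
# test_marks.py -- "slow" and "smoke" are user-defined group names
import pytest

@pytest.mark.slow
def test_big_sum():
    assert sum(range(10000)) == 49995000

@pytest.mark.smoke
def test_small_sum():
    assert 1 + 1 == 2
```

```bash
pytest -m "smoke or slow" -v    # select test cases by mark expression
pytest -k "small" -q            # select test cases by keyword match
```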
Selecting Test Cases to Execute with Pytest
In Pytest, you can select and execute test cases based on different dimensions such as test folders, test files, test classes, and test methods.
- Execute by test folder
- Execute by test file
- Execute by test class
- Execute by test method
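The corresponding command forms are sketched below (the paths, class name, and method name are placeholders):

```bash
pytest tests/                                     # by test folder
pytest tests/test_adder.py                        # by test file
pytest tests/test_adder.py::TestAdder             # by test class
pytest tests/test_adder.py::TestAdder::test_add   # by test method
```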
In addition, Pytest also supports multiple ways to control the execution of test cases, such as filtering execution, running in multiple processes, retrying execution, etc.
Writing Validation with Pytest
- During testing, we use the previously verified adder. Go to the Adder folder and create a new file test_adder.py in the picker_out_adder directory, with the following content:
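A sketch of what test_adder.py might contain; the DUTAdder class and its port and method names (a, b, cin, sum, cout, Step, Finish) are assumptions about the Python wrapper generated by our tool for the 64-bit adder, so adapt them to the code your version actually generates:

```python
# test_adder.py -- a sketch; DUTAdder and its ports/methods are assumed
# to match the generated Python wrapper for the 64-bit adder
import random
from UT_Adder import DUTAdder

MASK64 = (1 << 64) - 1

def test_adder():
    dut = DUTAdder()
    for _ in range(114514):
        a = random.getrandbits(64)
        b = random.getrandbits(64)
        cin = random.getrandbits(1)
        dut.a.value = a
        dut.b.value = b
        dut.cin.value = cin
        dut.Step(1)           # advance the simulation one cycle
        ref = a + b + cin     # software reference model
        assert dut.sum.value == ref & MASK64
        assert dut.cout.value == ref >> 64
    dut.Finish()
```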
- After running the test, the output is as follows:
The passing test indicates that after 114514 loops, no bugs have been found in our device so far. However, running many loops of randomly generated test cases consumes considerable resources, and such random cases may not effectively cover all boundary conditions. In the next section, we introduce a more efficient method for generating test cases.
2 - Hypothesis
Hypothesis
In the previous section, we wrote test cases by hand, specifying inputs and expected outputs for each one. This approach has some problems: test coverage tends to be incomplete, and boundary conditions are easy to overlook.

Hypothesis is a Python library for property-based testing. Its goal is to make testing simpler, faster, and more reliable. Instead of enumerating cases, you write properties (hypotheses) that your code should satisfy, and Hypothesis automatically generates test cases to verify them. This makes it easier to write comprehensive and efficient tests. Hypothesis can automatically generate many kinds of input data: basic types (e.g., integers, floats, strings), container types (e.g., lists, sets, dictionaries), and custom types. If a test fails, it shrinks the input data to find the smallest failing case.

With Hypothesis, you can better cover the boundary conditions of your code and uncover errors you might not have considered. This helps improve the quality and reliability of your code.
Basic Concepts
- Test Function: The function or method to be tested.
- Properties: Conditions that the test function should satisfy. Properties are applied to the test function as decorators.
- Strategy: A generator for test data. Hypothesis provides a range of built-in strategies, such as integers, strings, lists, etc. You can also define custom strategies.
- Test Generator: A function that generates test data based on strategies. Hypothesis automatically generates test data and passes it as parameters to the test function.
This article will briefly introduce Hypothesis based on testing requirements. The complete manual is available for in-depth study.
Installation
Install with pip and import in Python to use:
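A minimal sketch:

```bash
pip install hypothesis
```

```python
from hypothesis import given, strategies as st
```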
Basic Usage
Properties and Strategies
Hypothesis uses decorators to define the properties of test functions. The most common one is @given, which specifies how the inputs of the test function are generated. We can define a test function test_addition with the @given decorator and attach a strategy to its parameter x; the test generator then automatically produces test data and passes it in as an argument, for example:
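A sketch of test_addition; the property checked here (adding zero leaves any integer unchanged) is chosen for illustration:

```python
from hypothesis import given
from hypothesis import strategies as st

@given(x=st.integers())
def test_addition(x):
    # adding zero must not change any integer
    assert x + 0 == x
```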
In this example, integers() is a built-in strategy for generating integer test data. Hypothesis offers a variety of built-in strategies for generating different types of test data. Besides integers(), there are strategies for strings, booleans, lists, dictionaries, etc. For instance, using the text() strategy to generate string test data and using lists(text()) to generate lists of strings:
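For instance (the length properties asserted here are illustrative):

```python
from hypothesis import given
from hypothesis import strategies as st

@given(s=st.text(), words=st.lists(st.text()))
def test_text_strategies(s, words):
    # length is additive under concatenation, both for a single
    # string and for a list of strings
    assert len(s + s) == 2 * len(s)
    assert len("".join(words)) == sum(len(w) for w in words)
```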
You can also define custom strategies to generate specific types of test data, for example, a strategy for non-negative integers:
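A sketch of two equivalent ways to build such a strategy:

```python
from hypothesis import strategies as st

# restrict the built-in integers() strategy to non-negative values
non_negative_integers = st.integers(min_value=0)

# or derive it by mapping abs over all integers
non_negative_integers_mapped = st.integers().map(abs)
```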
Expectations
We can specify the expected result of a function and check it with an assertion:
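A sketch, where add is a hypothetical function under test and the expected value is computed by an independent reference expression:

```python
from hypothesis import given
from hypothesis import strategies as st

def add(a, b):
    return a + b

@given(a=st.integers(), b=st.integers())
def test_add(a, b):
    expected = a + b            # reference model for the expected result
    assert add(a, b) == expected
```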
Hypotheses and Assertions
When using Hypothesis for testing, we can use standard Python assertions to verify the properties of the test function. Hypothesis will automatically generate test data and run the test function based on the properties defined in the decorator. If an assertion fails, Hypothesis will try to narrow down the test data to find the smallest failing case.
Suppose we have a string reversal function. We can use an assert statement to check if reversing a string twice equals itself:
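A sketch, where reverse_string stands in for the function under test:

```python
from hypothesis import given
from hypothesis import strategies as st

def reverse_string(s: str) -> str:
    # a hypothetical implementation of the string-reversal function
    return s[::-1]

@given(st.text())
def test_double_reverse(s):
    # reversing twice must return the original string
    assert reverse_string(reverse_string(s)) == s
```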
Writing Tests
- Tests in Hypothesis consist of two parts: a function that looks like a regular test in your chosen framework, but with some extra parameters, and a @given decorator that specifies how to provide those parameters. The sketch after this list shows how to use it to verify the full adder we tested previously.
- Building on the previous section's code, we change how test cases are generated, from random numbers to the integers() strategy. The modified code is sketched below:
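A sketch of the modified test; as before, DUTAdder and its port and method names are assumptions about the wrapper generated by our tool:

```python
from hypothesis import given
from hypothesis import strategies as st
from UT_Adder import DUTAdder   # generated wrapper (name assumed)

MASK64 = (1 << 64) - 1

@given(
    a=st.integers(min_value=0, max_value=0xffffffffffffffff),
    b=st.integers(min_value=0, max_value=0xffffffffffffffff),
    cin=st.integers(min_value=0, max_value=1),
)
def test_full_adder(a, b, cin):
    dut = DUTAdder()
    dut.a.value = a
    dut.b.value = b
    dut.cin.value = cin
    dut.Step(1)               # advance the simulation one cycle
    ref = a + b + cin         # software reference model
    assert dut.sum.value == ref & MASK64
    assert dut.cout.value == ref >> 64
```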
In this example, the @given decorator and strategies are used to generate random data that meets the specified conditions. st.integers() is a strategy that produces integers within a given range: here, values between 0 and 0xffffffffffffffff for a and b, and between 0 and 1 for cin. Hypothesis automatically reruns this test many times, each time with different random inputs, which helps reveal potential boundary conditions or edge cases.
- Run the tests, and the output will be as follows:
As we can see, the tests were completed in a short amount of time.