This project aims to perform unit testing (Unit Test, UT) verification of the XiangShan Processor Kunming Lake architecture through open-source crowdsourcing. The chart below shows the verification status of each module in the XiangShan Kunming Lake architecture.
In the above chart, there are a total of - modules. By default, modules are gray. When the number of test cases in a module exceeds -, the module is fully lit. Currently, - modules are fully lit, and - modules are yet to be lit.
Overview of General Processor Modules
High-performance processors are the core of modern computing devices. They usually consist of three main parts: the frontend, the backend, and the memory subsystem. These parts work together to ensure the processor can efficiently execute complex computational tasks.
Frontend: The frontend, also known as the instruction fetch and decode stage, is responsible for fetching instructions from memory and decoding them into a format the processor can understand. This stage is critical to processor performance because it directly affects how quickly the processor can start executing instructions. The frontend typically includes an instruction cache, branch predictor, and instruction decoder. The instruction cache stores recently accessed instructions to reduce accesses to main memory, thus improving speed. The branch predictor tries to predict conditional branches in the program to fetch and decode subsequent instructions in advance, reducing the time spent waiting for branch results.
Backend: The backend, also known as the execution stage, is where the processor actually executes instructions. This stage includes the Arithmetic Logic Unit (ALU), Floating Point Unit (FPU), and various execution units. These units handle arithmetic operations, logic operations, data transfers, and other processor operations. The backend design is usually very complex because it needs to support multiple instruction set architectures (ISA) and optimize performance. To improve efficiency, modern processors often use superscalar architectures, meaning they can execute multiple instructions simultaneously.
Memory Subsystem: The memory subsystem is the bridge between the processor and memory. It includes data caches, memory controllers, and cache coherence protocols. Data caches store data frequently accessed by the processor to reduce accesses to main memory. The memory controller manages data transfers between the processor and memory. Cache coherence protocols ensure that in multiprocessor systems, all processors see a consistent memory state.
Designing high-performance processors requires balancing these three parts to achieve optimal performance. This often involves complex microarchitecture design and pipeline optimization.
2 - Prepare Verification Environment
Basic Environment Requirements
This project uses the Python programming language for UT verification, with picker and toffee as the main tools and test frameworks. Environment requirements are as follows:
Linux operating system. It is recommended to install Ubuntu 22.04 under WSL2.
Python. Python 3.11 is recommended.
picker. Install the latest version as instructed in the Quick Start.
toffee. It will be installed automatically later. You can also manually install the latest version as instructed in the Quick Start.
lcov. Used for report generation in the test stage. Install via package manager: sudo apt install lcov
The naming convention for RTL archives is: name-microarchitecture-GitTag-date.tar.gz, for example, openxiangshan-kmh-97e37a2237-24092701.tar.gz. When used, the repository code will filter out the git tag and suffix, so the version accessed via cfg.rtl.version is: openxiangshan-kmh-24092701. The directory structure inside the archive is:
```
openxiangshan-kmh-97e37a2237-24092701.tar.gz
└── rtl        # directory
    |-- *.sv   # all sv files
    `-- *.v    # all v files
```
Compile DUT
The purpose of this process is to package the RTL into a Python module using the picker tool. You can specify the DUT to be packaged via the make command, or package all DUTs at once.
If you want to package a specific DUT yourself, you need to create a script named build_ut_<name>.py in the scripts directory. This script must implement a build method, which is called automatically during packaging, and a line_coverage_files method that specifies the files used as the reference for line coverage.
Picker’s packaging supports adding internal signals; see the --internal parameter of picker and pass a custom YAML file.
```bash
# Calls the build method in scripts/build_ut_<name>.py to create the Python DUT to be verified
# If there are multiple DUTs, separate them with commas. Wildcards are supported.
# The default value is "*", which compiles all DUTs.
make dut DUTS=<name>
# Example:
make dut DUTS=backend_ctrl_block_decode
```
For example, after running make dut DUTS=backend_ctrl_block_decode, the corresponding Python package will be generated in the dut directory.
When running rtl, dut, test, and other make targets, the default configuration from configs/_default.yaml is used.
Of course, you can also use a custom configuration as follows:
```bash
# Specify a custom CFG file
make CFG=path/to/your_cfg.yaml
```
Similarly, you can specify key-value pairs directly on the command line. Currently, only the test-related stage supports command-line configuration key-value pairs:
```bash
# Specify KV, pass command-line arguments, separate key-value pairs with spaces
make test KV="log.term-level='debug' test.skip-tags=['RARELY_USED']"
```
3 - Run Tests
This project uses the PyTest testing framework for verification. When running tests, the PyTest framework automatically searches for all test_*.py files and executes all test cases that start with test_.
```bash
# Run all test cases in ut_* directories
make test_all

# Run test cases in the specified directory
make test target=<dir>

# For example, run all test cases in the ut_backend/ctrl_block/decode directory
make test target=ut_backend/ctrl_block/decode
```
You can pass pytest runtime arguments via the args parameter, for example enabling the pytest-xdist plugin for parallel execution across multiple processes:
```bash
make test args="-n 4"     # Use 4 processes
make test args="-n auto"  # Let the framework automatically choose the number of processes
```
*Note: pytest-xdist can also distribute tests across multiple nodes. See its documentation for details.
After running, an HTML version of the test report will be generated by default in the out/report directory. The HTML file can be opened directly in a browser (it is recommended to install the Open In Default Browser plugin in VS Code IDE).
Running tests mainly completes the following three parts:
Run Test Cases as required, which can be configured via options in cfg.tests
Collect test results and output test reports (the toffee-report tool automatically generates a consolidated report that merges the results of all tests)
Perform further statistics on the test report as needed (this step can be disabled via cfg.doc_result.disable = True)
4 - Add Test
To add a brand-new DUT test case, the following three steps need to be completed (this section uses the rvc_expander under the frontend ifu as an example):
Add a compilation script: Write a compilation script for the corresponding RTL in the scripts directory using Python (e.g., build_ut_frontend_ifu_rvc_expander.py).
Build the test environment: Create the target test UT directory in the appropriate location (e.g., ut_frontend/ifu/rvc_expander). If necessary, add the basic tools required for the DUT test in modules such as tools or comm.
Add test cases: Add test cases in the UT directory following the PyTest specification.
If you are adding content to an existing DUT test, simply follow the original directory structure.
For information on how to perform Python chip verification using the picker and toffee libraries, refer to: https://open-verify.cc/mlvp/docs
When testing, you also need to pay attention to the following:
UT Module Description: Add a README.md file in the top-level folder of the added module to provide an explanation. For specific formats and requirements, refer to the template.
Code Coverage: Code coverage is an important metric for chip verification. Generally, all code of the target DUT needs to be covered.
Functional Coverage: Functional coverage indicates how much of the target functionality has been verified. It usually needs to reach 100%.
In subsequent documentation, we will continue to use the rvc_expander module as an example to explain the above process in detail.
*Note: Directory or file names should be reasonable so that their specific meaning can be inferred from the naming.
4.1 - Add Compilation Script
Script Target
Write a compilation file for the corresponding RTL in the scripts directory using Python (e.g., build_ut_frontend_ifu_rvc_expander.py).
The goal of this script is to provide RTL-to-Python DUT compilation, target coverage files, and custom functionality.
Creation Process
Determine File Name
Select the UT to be verified in XiangShan Kunming Lake DUT Verification Progress. If it is not available or needs further refinement, you can manually add it by editing configs/dutree/xiangshan-kmh.yaml.
For example, if we want to verify the rvc_expander module under the ifu module in the frontend, we need to add the corresponding part to configs/dutree/xiangshan-kmh.yaml (this module already exists in the YAML file; this is just an example):
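A sketch of what the corresponding entry might look like (illustrative only; the exact schema is defined by configs/dutree/xiangshan-kmh.yaml, and the children relationship is described below):

```yaml
- name: frontend
  children:
    - name: ifu
      children:
        - name: rvc_expander
```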
Currently, the project includes four top-level modules:
ut_frontend (Frontend)
ut_backend (Backend)
ut_mem_block (Memory Access)
ut_misc (Miscellaneous)
Submodules do not have the ut_ prefix (the top-level directories have this prefix to distinguish them from other directories).
For example, if the target DUT to be verified is the rvc_expander module:
This module belongs to the frontend, so the top-level module is ut_frontend. Its submodule is ifu, and the target module is rvc_expander.
From the previously opened YAML file, we can also see that the children of frontend include ifu, and the children of ifu include rvc_expander.
Thus, the script name to be created is build_ut_frontend_ifu_rvc_expander.py.
Write the build(cfg) -> bool Function
The build function is defined as follows:
```python
def build(cfg) -> bool:
    """Compile DUT

    Args:
        cfg: Runtime configuration, which can be used to access configuration items, e.g., cfg.rtl.version

    Return:
        Returns True or False, indicating whether the function achieved its intended goal
    """
```
The build function is called during make dut. Its main purpose is to convert the target RTL into a Python module. Other necessary processes, such as compiling dependencies, can also be added. For example, in build_ut_frontend_ifu_rvc_expander.py, the function primarily performs RTL checks, DUT checks, RTL compilation, and disasm dependency compilation:
```python
import os
from comm import warning, info

def build(cfg):
    # Import related dependencies
    from toffee_test.markers import match_version
    from comm import is_all_file_exist, get_rtl_dir, exe_cmd, get_root_dir
    # Check RTL version (an empty version parameter means all versions are supported)
    if not match_version(cfg.rtl.version, "openxiangshan-kmh-*"):
        warning("ifu frontend rvc expander: %s" % f"Unsupported RTL version {cfg.rtl.version}")
        return False
    # Check if the target file exists in the current RTL
    f = is_all_file_exist(["rtl/RVCExpander.sv"], get_rtl_dir(cfg=cfg))
    assert f is True, f"File {f} not found"
    # If the DUT does not contain RVCExpander, use picker to package it into Python
    if not os.path.exists(get_root_dir("dut/RVCExpander")):
        info("Exporting RVCExpander.sv")
        s, out, err = exe_cmd(f'picker export --cp_lib false '
                              f'{get_rtl_dir("rtl/RVCExpander.sv", cfg=cfg)} '
                              f'--lang python --tdir {get_root_dir("dut")}/ -w rvc.fst -c')
        assert s, "Failed to export RVCExpander.sv: %s\n%s" % (out, err)
    # If disasm/build does not exist in tools, compile disasm
    if not os.path.exists(get_root_dir("tools/disasm/build")):
        info("Building disasm")
        s, _, _ = exe_cmd("make -C %s" % get_root_dir("tools/disasm"))
        assert s, "Failed to build disasm"
    # Compilation successful
    return True

def line_coverage_files(cfg):
    return ["RVCExpander.v"]
```
In the scripts directory, you can create subdirectories to store files needed for UT verification. For example, the rvc_expander module creates a scripts/frontend_ifu_rvc_expander directory, where rtl_file.f specifies the input RTL file, and line_coverage.ignore stores lines of code to be ignored in coverage statistics. Custom directory names should be reasonable and should indicate the module and file they belong to.
Write the line_coverage_files(cfg) -> list[str] Function
The line_coverage_files function is defined as follows:
```python
def line_coverage_files(cfg) -> list[str]:
    """Specify files to be covered

    Args:
        cfg: Runtime configuration, which can be used to access configuration items, e.g., cfg.rtl.version

    Return:
        Returns the names of RTL files targeted for line coverage statistics
    """
```
In the build_ut_frontend_ifu_rvc_expander.py file, the line_coverage_files function is defined as follows:
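```python
def line_coverage_files(cfg):
    return ["RVCExpander.v"]
```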
This indicates that line coverage statistics for this module focus on the RVCExpander.v file. To enable test result processing, set disable: False under doc-result in configs/_default.yaml (the default is False, i.e., enabled). If test result processing is disabled (disable: True), the above function will not be called.
4.2 - Build Test Environment
Determine Directory Structure
The directory structure of the Unit Test (UT) should match its naming convention. For example, frontend.ifu.rvc_expander should be located in the ut_frontend/ifu/rvc_expander directory, and each directory level must include an __init__.py file to enable Python imports.
The file for this chapter is your_module_wrapper.py (if your module is rvc_expander, the file would be rvc_expander_wrapper.py).
A wrapper is essentially a layer of abstraction that encapsulates the methods needed for testing into APIs decoupled from the DUT. These APIs are then used in test cases.
*Note: Decoupling ensures that test cases are independent of the DUT, allowing them to be written and debugged without needing to know the DUT’s implementation details. For more information, refer to Decoupling Verification Code from the DUT.
This file should be placed in the ut_frontend_or_backend/top_module/your_module/env directory. For example, if rvc_expander belongs to the frontend, its top-level directory should be ut_frontend. The next-level directory would be ifu, followed by rvc_expander. Since we are building the test environment, an additional env directory is created. The full path would be: ut_frontend_or_backend/top_module/your_module/env.
In the rvc_expander directory, there are two versions: classical_version (traditional) and toffee_version (using Toffee).
The traditional version uses the pytest framework for testing, while the Toffee version leverages more features of the Toffee framework.
In general, the traditional version is sufficient for most cases, and the Toffee version is only needed when the traditional version cannot meet the requirements.
When building the test environment, choose one version.
The directory structure within a module (e.g., rvc_expander) is determined by the contributor. You do not need to create additional classical_version or toffee_version directories, but the structure must comply with Python standards and be logically and consistently named.
Env Requirements
Perform RTL version checks.
The APIs provided by Env must be independent of pins and timing.
The APIs provided by Env must be stable and should not undergo arbitrary changes in interfaces or return values.
Define necessary fixtures.
Initialize functional checkpoints (functional checkpoints can be independent modules).
Perform coverage statistics.
Include documentation.
Building the Test Environment: Traditional Version
In the test environment for the UT verification module, the goal is to accomplish the following:
Encapsulate DUT functionality to provide stable APIs for testing.
Define functional coverage.
Define necessary fixtures for test cases.
Collect coverage statistics at appropriate times.
Taking the RVCExpander in the IFU environment as an example (ut_frontend/ifu/rvc_expander/classical_version/env/rvc_expander_wrapper.py):
1. DUT Encapsulation
The following content is located in ut_frontend/ifu/rvc_expander/classical_version/env/rvc_expander_wrapper.py.
```python
class RVCExpander(toffee.Bundle):
    def __init__(self, cover_group, **kwargs):
        super().__init__()
        self.cover_group = cover_group
        self.dut = DUTRVCExpander(**kwargs)                   # Create DUT
        self.io = toffee.Bundle.from_prefix("io_", self.dut)  # Bind pins using Bundle and prefix
        self.bind(self.dut)                                   # Bind Bundle to DUT

    def expand(self, instr, fsIsOff):
        self.io["in"].value = instr         # Assign value to DUT pin
        self.io["fsIsOff"].value = fsIsOff  # Assign value to DUT pin
        self.dut.RefreshComb()              # Trigger combinational logic
        self.cover_group.sample()           # Collect functional coverage statistics
        # Return result and illegal instruction flag
        return self.io["out_bits"].value, self.io["ill"].value

    def stat(self):
        # Get current state
        return {
            "instr": self.io["in"].value,          # Input instruction
            "decode": self.io["out_bits"].value,   # Decoded result
            "illegal": self.io["ill"].value != 0,  # Whether the input is illegal
        }
```
In the example above, class RVCExpander encapsulates DUTRVCExpander and provides two APIs:
expand(instr: int, fsIsOff: bool) -> (int, int): Accepts an input instruction instr for decoding and returns (result, illegal instruction flag). If the illegal instruction flag is non-zero, the input instruction is illegal.
stat() -> dict(instr, decode, illegal): Returns the current state, including the input instruction, decoded result, and illegal instruction flag.
These APIs abstract away the DUT’s pins, exposing only general functionality to external programs.
2. Define Functional Coverage
Define functional coverage in the environment whenever possible. If necessary, coverage can also be defined in test cases. For details on defining functional coverage with Toffee, refer to What is Functional Coverage. To establish a clear relationship between functional checkpoints and test cases, functional coverage definitions should be linked to test cases (reverse marking).
The following content is located in ut_frontend/ifu/rvc_expander/classical_version/env/rvc_expander_wrapper.py.
```python
import toffee.funcov as fc

# Create a functional coverage group
g = fc.CovGroup(UT_FCOV("../../../CLASSIC"))

def init_rvc_expander_funcov(expander, g: fc.CovGroup):
    """Add watch points to the RVCExpander module to collect functional coverage information"""
    # 1. Add point RVC_EXPAND_RET to check expander return value:
    #    - bin ERROR: The instruction is not illegal
    #    - bin SUCCE: The instruction is not expanded
    g.add_watch_point(expander, {
        "ERROR": lambda x: x.stat()["illegal"] == False,
        "SUCCE": lambda x: x.stat()["illegal"] != False,
    }, name="RVC_EXPAND_RET")
    ...
    # 5. Reverse mark functional coverage to the checkpoint
    def _M(name):
        # Get the module name
        return module_name_with(name, "../../test_rv_decode")
    # - Mark RVC_EXPAND_RET
    g.mark_function("RVC_EXPAND_RET", _M([
        "test_rvc_expand_16bit_full",
        "test_rvc_expand_32bit_full",
        "test_rvc_expand_32bit_randomN",
    ]), bin_name=["ERROR", "SUCCE"])
    ...
```
In the code above, a functional checkpoint named RVC_EXPAND_RET is added to check whether the RVCExpander module can return illegal instructions. The checkpoint requires both ERROR and SUCCE conditions to be met, meaning the illegal field in stat() must have both True and False values. After defining the checkpoint, the mark_function method is used to link it to the relevant test cases.
3. Define Necessary Fixtures
The following content is located in ut_frontend/ifu/rvc_expander/classical_version/env/rvc_expander_wrapper.py.
```python
version_check = get_version_checker("openxiangshan-kmh-*")  # Specify the required RTL version

@pytest.fixture()
def rvc_expander(request):
    version_check()            # Perform version check
    fname = request.node.name  # Get the name of the test case using this fixture
    wave_file = get_out_dir("decoder/rvc_expander_%s.fst" % fname)      # Set waveform file path
    coverage_file = get_out_dir("decoder/rvc_expander_%s.dat" % fname)  # Set code coverage file path
    coverage_dir = os.path.dirname(coverage_file)
    os.makedirs(coverage_dir, exist_ok=True)  # Create directory if it doesn't exist
    expander = RVCExpander(g, coverage_filename=coverage_file,
                           waveform_filename=wave_file)  # Create RVCExpander
    expander.dut.io_in.AsImmWrite()        # Set immediate write timing for io_in pin
    expander.dut.io_fsIsOff.AsImmWrite()   # Set immediate write timing for io_fsIsOff pin
    init_rvc_expander_funcov(expander, g)  # Initialize functional checkpoints
    yield expander                         # Return the created RVCExpander to the test case
    expander.dut.Finish()                  # End DUT after the test case is executed
    set_line_coverage(request, coverage_file)  # Report code coverage file to toffee-report
    set_func_coverage(request, g)              # Report functional coverage data to toffee-report
    g.clear()                                  # Clear functional coverage statistics
```
This fixture accomplishes the following:
Performs RTL version checks. If the version does not meet the "openxiangshan-kmh-*" requirement, the test case using this fixture is skipped.
Creates the DUT and specifies the paths for waveform and code coverage files (the paths include the name of the test case using the fixture: fname).
Calls init_rvc_expander_funcov to add functional coverage points.
Ends the DUT and processes code and functional coverage (sending them to toffee-report for processing).
Clears functional coverage statistics.
*Note: In PyTest, before a test case such as test_A(rvc_expander, ...) is executed (rvc_expander is the name of the fixture function we defined with the fixture decorator), the part of rvc_expander(request) before the yield keyword is run automatically, which is equivalent to initialization. The object produced by yield is then passed to test_A under the fixture's name. After the test case completes, execution continues with the part of the fixture after the yield keyword. For example, in the coverage-collection code that follows, the penultimate line rvc_expand(rvc_expander, generate_rvc_instructions(start, end)) receives rvc_expander, i.e., the object yielded by the fixture of the same name.
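To make the fixture mechanics concrete, here is a minimal, self-contained pytest sketch (all names are illustrative):

```python
import pytest

@pytest.fixture()
def resource():
    r = {"ready": True}  # Runs before the test (setup)
    yield r              # Passed to the test as the `resource` argument
    r.clear()            # Runs after the test (teardown)

def test_A(resource):
    assert resource["ready"]
```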
4. Collect Coverage Statistics
The following content is located in ut_frontend/ifu/rvc_expander/classical_version/test_rvc_expander.py.
```python
N = 10
T = 1 << 16

@pytest.mark.toffee_tags(TAG_LONG_TIME_RUN)
@pytest.mark.parametrize("start,end", [(r * (T // N), (r + 1) * (T // N) if r < N - 1 else T) for r in range(N)])
def test_rvc_expand_16bit_full(rvc_expander, start, end):
    """Test the RVC expand function with a full compressed instruction set

    Description:
        Perform an expand check on 16-bit compressed instructions within the range from 'start' to 'end'.
    """
    # Add checkpoint RVC_EXPAND_ALL_16B to check the expander input range
    covered = -1  # At this point, the range [start, end] is not covered yet
    g.add_watch_point(rvc_expander,
                      {"RANGE[%d-%d]" % (start, end): lambda _: covered == end},
                      name="RVC_EXPAND_ALL_16B", dynamic_bin=True)
    # Reverse mark function to the checkpoint
    g.mark_function("RVC_EXPAND_ALL_16B", test_rvc_expand_16bit_full,
                    bin_name="RANGE[%d-%d]" % (start, end))
    # Drive the expander and check the result
    rvc_expand(rvc_expander, generate_rvc_instructions(start, end))
    # When execution reaches here, the range [start, end] is covered
    covered = end
    g.sample()  # Sample coverage
```
After defining coverage, it must be collected in the test cases. In the code above, a functional checkpoint named RVC_EXPAND_ALL_16B is added to the rvc_expander object in the test case using add_watch_point. The checkpoint is then marked and sampled. Coverage sampling triggers a callback that evaluates the bins defined in add_watch_point; if a bin's condition evaluates to True, it is counted as a pass.
Building the Test Environment: Toffee Version
Testing with Python can be enhanced by using our open-source testing framework Toffee.
Toffee uses Bundles to bind to DUTs. It provides multiple methods for establishing Bundle-to-DUT bindings. Relevant code can be found in ut_frontend/ifu/rvc_expander/toffee_version/bundle.
Manual Binding
In the Toffee framework, the lowest-level class supporting pin binding is Signal, which binds to DUT pins by name matching. For example, consider the simplest RVCExpander, whose I/O consists of four signals: io_in, io_fsIsOff, io_out_bits, and io_ill. A common prefix, such as io_, can be extracted (note that in cannot be used directly as a variable name in Python, since it is a keyword). The remaining parts can then be defined as pin names in the corresponding Bundle class:
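A sketch of such a Bundle class, consistent with the toffee_version code shown later (the leading underscore both completes the io prefix when binding and sidesteps the keyword issue):

```python
from toffee import Bundle, Signals

class RVCExpanderIOBundle(Bundle):
    # Pin names after extracting the common prefix; io + _in -> io_in, etc.
    _in, _fsIsOff, _out_bits, _ill = Signals(4)
```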
The Bundle class definition can also be omitted by using prefix binding:
```python
self.io = toffee.Bundle.from_prefix("io_", self.dut)  # Bind pins using Bundle and prefix
self.bind(self.dut)
```
If the from_prefix method is passed a DUT, it automatically generates pin definitions based on the prefix and DUT pin names. Accessing the pins can then be done using a dictionary-like approach:
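For example, mirroring the classical_version wrapper shown earlier in this chapter:

```python
self.io["in"].value = instr          # Drive the io_in pin
decoded = self.io["out_bits"].value  # Read the io_out_bits pin
```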
These methods generate Bundle code by passing in a DUT and generation rules (dict, prefix, or regex).
The bundle_code_intel_gen.py script parses the signals.json file generated by Picker to automatically generate hierarchical Bundle code. It can be invoked from the command line:
```bash
python bundle_code_intel_gen.py [signal] [target]
```
If you encounter bugs in the auto-generation scripts, feel free to submit an issue for us to fix.
Agent: Driving Methods
If Bundles abstract the data responsibilities of a DUT, Agents encapsulate its behavioral responsibilities into interfaces. Simply put, an Agent provides multiple methods that abstract groups of I/O operations into specific behaviors:
```python
class RVCExpanderAgent(Agent):
    def __init__(self, bundle: RVCExpanderIOBundle):
        super().__init__(bundle)
        self.bundle = bundle

    @driver_method()
    async def expand(self, instr, fsIsOff):
        # Accepts RVC instruction and fs.status enable flag
        self.bundle._in.value = instr         # Assign value to pin
        self.bundle._fsIsOff.value = fsIsOff  # Assign value to pin
        await self.bundle.step()              # Trigger clock
        return (self.bundle._out_bits.value,  # Return expanded instruction
                self.bundle._ill.value)       # Return legality check
```
For example, the RVCExpander’s instruction expansion function accepts an input instruction (which could be an RVI or RVC instruction) and the CSR’s enable flag for fs.status. This functionality is abstracted into the expand method, which takes two parameters in addition to self. The method returns the corresponding RVI instruction and a legality check for the input instruction.
Env: Test Environment
```python
class RVCExpanderEnv(Env):
    def __init__(self, dut: DUTRVCExpander):
        super().__init__()
        dut.io_in.xdata.AsImmWrite()
        dut.io_fsIsOff.xdata.AsImmWrite()  # Set pin write timing
        self.agent = RVCExpanderAgent(
            RVCExpanderIOBundle.from_prefix("io").bind(dut))  # Complete prefix and bind DUT
```
Coverage Definition
The method for defining coverage groups is similar to the one described earlier and will not be repeated here.
Due to Toffee’s more powerful coverage management features, manual line coverage settings are not needed. Additionally, because of Toffee’s clock mechanism, it is recommended to check if all tasks have ended at the end of the suite code.
4.3 - Add Test Cases
Naming Requirements
All test case files should be named in the format test_*.py, where * is replaced with the test target (e.g., test_rvc_expander.py). All test cases should also start with the test_ prefix. The test case names must have clear and meaningful descriptions.
Examples of naming:
```python
def test_a():  # Not acceptable, as "a" does not indicate the test target
    pass

def test_rvc_expand_16bit_full():  # Acceptable, as the name indicates the test content
    pass
```
Using Assert
Each test case must use assert to determine whether the test passes. pytest derives the test result from assert statements, so the assertions themselves must be correct.
The following content is located in ut_frontend/ifu/rvc_expander/classical_version/test_rvc_expander.py:
```python
def rvc_expand(rvc_expander, ref_insts, is_32bit=False, fsIsOff=False):
    """Compare the RVC expand result with the reference

    Args:
        rvc_expander (wrapper): the fixture of the RVC expander
        ref_insts (list[int]): the reference instruction list
    """
    find_error = 0
    for insn in ref_insts:
        insn_disasm = disasmbly(insn)
        value, instr_ex = rvc_expander.expand(insn, fsIsOff)
        if is_32bit:
            assert value == insn, "RVC expand error, 32-bit instruction must remain unchanged"
        if (insn_disasm == "unknown") and (instr_ex == 0):
            debug(f"Found bad instruction: {insn}, ref: 1, dut: 0")
            find_error += 1
        elif (insn_disasm != "unknown") and (instr_ex == 1):
            if instr_filter(insn_disasm) != 1:
                debug(f"Found bad instruction: {insn}, disasm: {insn_disasm}, ref: 0, dut: 1")
                find_error += 1
    assert find_error == 0, f"RVC expand error ({find_error} errors)"
```
In addition, each test case should carry a docstring summarizing the test, in the following format:

```python
def test_<name>(a: type_a, b: type_b):
    """Test abstract

    Args:
        a (type_a): Description of argument a.
        b (type_b): Description of argument b.

    Detailed test description here (if needed).
    """
    ...
```
Test Case Management
To facilitate test case management, use the @pytest.mark.toffee_tags tag feature provided by toffee-test. Refer to the Other section of this site and the toffee-test documentation.
Reference Test Cases
If many test cases share the same operations, the common parts can be extracted into a utility function. For example, in RVCExpander verification, the comparison of compressed instruction expansion with the reference model (disasm) can be encapsulated into the following function:
The following content is located in ut_frontend/ifu/rvc_expander/classical_version/test_rvc_expander.py:
```python
def rvc_expand(rvc_expander, ref_insts, is_32bit=False, fsIsOff=False):
    """Compare the RVC expand result with the reference

    Args:
        rvc_expander (wrapper): the fixture of the RVC expander
        ref_insts (list[int]): the reference instruction list
    """
    find_error = 0
    for insn in ref_insts:
        insn_disasm = disasmbly(insn)
        value, instr_ex = rvc_expander.expand(insn, fsIsOff)
        if is_32bit:
            assert value == insn, "RVC expand error, 32-bit instruction must remain unchanged"
        if (insn_disasm == "unknown") and (instr_ex == 0):
            debug(f"Found bad instruction: {insn}, ref: 1, dut: 0")
            find_error += 1
        elif (insn_disasm != "unknown") and (instr_ex == 1):
            if instr_filter(insn_disasm) != 1:
                debug(f"Found bad instruction: {insn}, disasm: {insn_disasm}, ref: 0, dut: 1")
                find_error += 1
    assert find_error == 0, f"RVC expand error ({find_error} errors)"
```
The above utility function includes assert statements, so the test cases calling this function can also rely on these assertions to determine the results.
During test case development, debugging is often required. To quickly set up the verification environment, “smoke tests” can be written for debugging. For example, a smoke test for expanding 16-bit compressed instructions in RVCExpander is as follows:
```python
@pytest.mark.toffee_tags(TAG_SMOKE)
def test_rvc_expand_16bit_smoke(rvc_expander):
    """Test the RVC expand function with 1 compressed instruction"""
    rvc_expand(rvc_expander, generate_rvc_instructions(start=100, end=101))
```
For easier management, the above test case is tagged with the SMOKE label using toffee_tags. Its input parameter is rvc_expander, which will automatically invoke the corresponding fixture with the same name during runtime.
The goal of testing 16-bit compressed instructions in RVCExpander is to traverse all 2^16 compressed instructions and verify that all cases match the reference model (disasm). If a single test is used for traversal, it would take a significant amount of time. To address this, we can use pytest’s parametrize feature to configure test parameters and execute them in parallel using the pytest-xdist plugin:
The following content is located in ut_frontend/ifu/rvc_expander/classical_version/test_rvc_expander.py:
```python
N = 10
T = 1 << 16

@pytest.mark.toffee_tags(TAG_LONG_TIME_RUN)
@pytest.mark.parametrize("start,end", [(r * (T // N), (r + 1) * (T // N) if r < N - 1 else T) for r in range(N)])
def test_rvc_expand_16bit_full(rvc_expander, start, end):
    """Test the RVC expand function with a full compressed instruction set

    Description:
        Perform an expand check on 16-bit compressed instructions within the range from 'start' to 'end'.
    """
    # Add checkpoint RVC_EXPAND_ALL_16B to check the expander input range.
    # When execution reaches here, the range [start, end] is covered
    g.add_watch_point(rvc_expander,
                      {"RANGE[%d-%d]" % (start, end): lambda _: True},
                      name="RVC_EXPAND_ALL_16B").sample()
    # Reverse mark function to the checkpoint
    g.mark_function("RVC_EXPAND_ALL_16B", test_rvc_expand_16bit_full,
                    bin_name="RANGE[%d-%d]" % (start, end))
    # Drive the expander and check the result
    rvc_expand(rvc_expander, generate_rvc_instructions(start, end))
```
In the above test case, the parameters start and end are defined to specify the range of compressed instructions. These parameters are grouped and assigned using the @pytest.mark.parametrize decorator. The variable N specifies the number of groups for the target data, with a default of 10 groups. During runtime, the test case test_rvc_expand_16bit_full will expand into 10 test cases, such as test_rvc_expand_16bit_full[0-6553] to test_rvc_expand_16bit_full[58977-65536].
4.4 - Code Coverage
Code coverage is a metric that measures which parts of the tested code have been executed and which parts have not. By analyzing code coverage, the effectiveness and thoroughness of testing can be evaluated.
Code coverage includes:
Line Coverage: The number of lines executed in the tested code. This is the simplest metric, and the goal is usually 100%.
Branch Coverage: Whether each branch of every control structure has been executed. For example, in an if statement, have both the true and false branches been executed?
FSM Coverage: Whether all states of a finite state machine have been reached.
Toggle Coverage: Tracks the toggling of signals in the tested code, ensuring that every circuit node has both 0 -> 1 and 1 -> 0 transitions.
Path Coverage: Examines the coverage of paths. In always or initial blocks, if ... else and case statements can create various data paths in the circuit structure.
*Note: The primary simulator used in this project is Verilator, with a focus on line coverage. Verilator supports coverage statistics; when building the DUT, the -c option must be added to the compilation options to enable them.
Relevant Locations in This Project
To enable coverage, the -c option must be added during compilation (when using the picker command). Refer to the Picker Parameter Explanation. Additionally, the line coverage function must be implemented and enabled in the test files to generate coverage statistics during Toffee testing.
In summary, code coverage in this project comes into play when compiling the DUT, when writing and enabling the line coverage functions, and in the tests:
Write the line_coverage_files(cfg) -> list[str] function as needed, and enable test result processing (doc_result.disable = False) to ensure it is invoked.
```python
set_line_coverage(request, coverage_file)  # Pass the generated code coverage file to toffee-report
```
Use the toffee_test.set_line_coverage function to pass the coverage file to toffee-test, enabling it to collect the data needed to generate reports with line coverage.
Ignoring Specific Statistics
Sometimes, certain parts of the code may need to be excluded from coverage statistics. For example, some parts may not need to be tested, or it may be normal for certain parts to remain uncovered. Ignoring these parts can help optimize coverage reports or assist in debugging. Our framework supports two methods for ignoring coverage:
1. Using Verilator to Specify Ignored Sections
Using verilator_coverage_off/on Directives
Verilator supports ignoring specific code sections from coverage statistics using comment directives. For example:
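A minimal sketch (the module and signals are illustrative; the metacomments are Verilator's coverage_off/coverage_on directives):

```systemverilog
module example (
    input  wire clk,
    output reg  dbg
);
  /* verilator coverage_off */
  // Lines between the directives are excluded from coverage statistics
  always @(posedge clk) dbg <= 1'b0;
  /* verilator coverage_on */
endmodule
```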
2. Using the line_coverage.ignore File
During coverage statistics, the line_coverage.ignore file in the scripts/frontend_ifu_rvc_expander directory is searched, and the wildcard patterns it contains are used for filtering.
```
# Line coverage ignore file
# Ignore Top file
*/RVCExpander_top*
```
The above file indicates that files containing the keyword RVCExpander_top will be ignored during coverage statistics (the corresponding data is collected but excluded from the final report).
Now, run the tests as described in Run Tests. Afterward, an HTML version of the test report will be generated in the out/report directory by default.
You can also view the statistics results by selecting the corresponding test report (named by test time) under “Current Version” in the Progress Overview section and clicking the link on the right.
4.5 - Functional Coverage
Functional Coverage is a user-defined metric used to measure the proportion of design specifications executed during verification. Functional coverage focuses on whether the features and functionalities of the design have been covered by the test cases.
Mapping refers to associating functional points with test cases. This allows you to see which test cases correspond to each functional point during statistics, making it easier to identify which functional points have more test cases and which have fewer. This helps optimize test cases in the later stages.
Relevant Locations in This Project
Functional coverage must be defined before it can be collected, primarily during the process of building the test environment.
Functional points can also be defined inside individual test cases when they are needed there.
Functional Coverage Workflow
Specify Group Name
The test report matches the Group name with the DUT name. Use comm.UT_FCOV to obtain the DUT prefix. For example, in the Python module ut_frontend/ifu/rvc_expander/classical_version/env/rvc_expander_wrapper.py, the following call is made:
```python
from comm import UT_FCOV

# Module name: ut_frontend.ifu.rvc_expander.classical_version.env.rvc_expander_wrapper
# ../../../ strips rvc_expander_wrapper, env, and classical_version from the module name
# UT_FCOV will automatically remove the ut_ prefix
g = fc.CovGroup(UT_FCOV("../../../CLASSIC"))  # name = UT_FCOV("../../../CLASSIC")
```
The value of name is frontend.ifu.rvc_expander.CLASSIC. When collecting the final results, the longest prefix will be matched to the target UT (i.e., matched to the frontend.ifu.rvc_expander module).
Create Coverage Group
Use toffee’s funcov to create a coverage group.
```python
import toffee.funcov as fc

# Use the GROUP name specified above
g = fc.CovGroup(name)
```
These two steps can also be combined into one: g = fc.CovGroup(UT_FCOV("../../../CLASSIC")).
The created g object represents a functional coverage group, which can be used to provide watch points and mappings.
Add Watch Points and Mappings
Inside each test case, you can use add_watch_point (or its alias add_cover_point, which is identical) to add watch points and mark_function to add mappings.
A watch point is triggered when the signal meets the conditions defined in the watch point, and its name (i.e., the functional point) will be recorded in the functional coverage.
A mapping associates functional points with test cases, allowing you to see which test cases correspond to each functional point during statistics.
The location of the watch point depends on the actual situation. Generally, adding watch points outside the test case is acceptable. However, sometimes more flexibility is required.
Outside the test case (in decode_wrapper.py):
```python
def init_rvc_expander_funcov(expander, g: fc.CovGroup):
    """Add watch points to the RVCExpander module to collect functional coverage information"""
    # 1. Add point RVC_EXPAND_RET to check expander return value:
    #    - bin ERROR: The instruction is not illegal
    #    - bin SUCCE: The instruction is not expanded
    g.add_watch_point(expander, {
        "ERROR": lambda x: x.stat()["illegal"] == False,
        "SUCCE": lambda x: x.stat()["illegal"] != False,
    }, name="RVC_EXPAND_RET")

    # 5. Reverse mark function coverage to the check point
    def _M(name):
        # Get the module name
        return module_name_with(name, "../../test_rv_decode")
    # - mark RVC_EXPAND_RET
    g.mark_function("RVC_EXPAND_RET", _M([
        "test_rvc_expand_16bit_full",
        "test_rvc_expand_32bit_full",
        "test_rvc_expand_32bit_randomN",
    ]), bin_name=["ERROR", "SUCCE"])
    # The End
    return None
```
In this example, the first g.add_watch_point is placed outside the test case because it is not directly related to the existing test cases. Placing it outside the test case is more convenient. Once the conditions in the bins of the add_watch_point method are triggered, the toffee-test framework will collect the corresponding functional points.
Inside the test case (in test_rvc_expander.py):
```python
N = 10
T = 1 << 32

@pytest.mark.toffee_tags([TAG_LONG_TIME_RUN, TAG_RARELY_USED])
@pytest.mark.parametrize("start,end", [(r * (T // N), (r + 1) * (T // N) if r < N - 1 else T) for r in range(N)])
def test_rvc_expand_32bit_full(rvc_expander, start, end):
    """Test the RVC expand function with a full 32-bit instruction set

    Description:
        Randomly generate N 32-bit instructions for each check, and repeat the process K times.
    """
    # Add check point RVC_EXPAND_ALL_32B to check instr bits.
    covered = -1
    g.add_watch_point(rvc_expander,
                      {"RANGE[%d-%d]" % (start, end): lambda _: covered == end},
                      name="RVC_EXPAND_ALL_32B", dynamic_bin=True)
    # Reverse mark function to the check point
    g.mark_function("RVC_EXPAND_ALL_32B", test_rvc_expand_32bit_full)
    # Drive the expander and check the result
    rvc_expand(rvc_expander, list([_ for _ in range(start, end)]))
    # When reaching here, the range [start, end] is covered
    covered = end
    g.sample()
```
In this example, the watch point is inside the test case because start and end are determined by pytest.mark.parametrize. Since the values are not fixed, the watch point needs to be added inside the test case.
Sampling
At the end of the previous example, we called g.sample(). This function notifies toffee-test that the bins in add_watch_point have been executed. If the conditions are met, the watch point is recorded as a pass.
There is also an automatic sampling option. During the test environment setup, you can add StepRis(lambda x: g.sample()) in the fixture definition. This will automatically sample at the rising edge of each clock cycle.
The following content is from ut_backend/ctrl_block/decode/env/decode_wrapper.py:
```python
@pytest.fixture()
def decoder(request):
    # Before test
    init_rv_decoder_funcov(g)
    func_name = request.node.name
    # If the output directory does not exist, create it
    output_dir_path = get_out_dir("decoder/log")
    os.makedirs(output_dir_path, exist_ok=True)
    decoder = Decode(DUTDecodeStage(
        waveform_filename=get_out_dir("decoder/decode_%s.fst" % func_name),
        coverage_filename=get_out_dir("decoder/decode_%s.dat" % func_name),
    ))
    decoder.dut.InitClock("clock")
    decoder.dut.StepRis(lambda x: g.sample())
    yield decoder
    # After test
    decoder.dut.Finish()
    coverage_file = get_out_dir("decoder/decode_%s.dat" % func_name)
    if not os.path.exists(coverage_file):
        raise FileNotFoundError(f"File not found: {coverage_file}")
    set_line_coverage(request, coverage_file, get_root_dir("scripts/backend_ctrlblock_decode"))
    set_func_coverage(request, g)
    g.clear()
```
As shown above, we register g.sample() via StepRis before yield, enabling automatic sampling at the rising edge of each clock cycle.
The StepRis function executes the passed function at the rising edge of each clock cycle. For more details, refer to the Picker Usage Guide.
5 - How to Participate in This Project
How to Submit a Bug
Submit according to the ISSUE template and mark the corresponding labels (bug, bug level, etc.).
The maintainer of the corresponding module will check and modify the labels and XiangShan branch as needed.
This project welcomes anyone to participate via ISSUE, DISCUSS, Fork, or PR.
WanZhongYiXin QQ Group:
6 - Template-PR
# Description
Please include a summary of the changes and the related issue.
Please also include relevant motivation and context.
List any dependencies that are required for this change.
Fixes # (issue)
## Type of change
Please delete options that are not relevant.
- [ ] Bug fix (non-breaking change which fixes an issue)
- [ ] New feature (non-breaking change which adds functionality)
- [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
- [ ] This change requires a documentation update
# How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce.
Please also list any relevant details for your test configuration
- [ ] Test A
- [x] Test B
**Test Configuration**:
* Firmware version:
* Hardware:
* Toolchain:
* SDK:
# Checklist:
- [ ] My code follows the style guidelines of this project
- [ ] I have performed a self-review of my code
- [ ] I have commented my code, particularly in hard-to-understand areas
- [ ] I have made corresponding changes to the documentation
- [ ] My changes generate no new warnings
- [ ] I have added tests that prove my fix is effective or that my feature works
- [ ] New and existing unit tests pass locally with my changes
- [ ] Any dependent changes have been merged and published in downstream modules
The rendered effect is as follows:
Description
Please include a summary of the changes and the related issue. Please also include relevant motivation
and context. List any dependencies that are required for this change.
Fixes # (issue)
Type of change
Please delete options that are not relevant.
Bug fix (non-breaking change which fixes an issue)
New feature (non-breaking change which adds functionality)
Breaking change (fix or feature that would cause existing functionality to not work as expected)
This change requires a documentation update
How Has This Been Tested?
Please describe the tests that you ran to verify your changes. Provide instructions so we can reproduce.
Please also list any relevant details for your test configuration
Test A
Test B
Test Configuration:
Firmware version:
Hardware:
Toolchain:
SDK:
Checklist:
My code follows the style guidelines of this project
I have added the appropriate labels
I have performed a self-review of my code
I have commented my code, particularly in hard-to-understand areas
I have made corresponding changes to the documentation
My changes generate no new warnings
I have added tests that prove my fix is effective or that my feature works
New and existing unit tests pass locally with my changes
Any dependent changes have been merged and published in downstream modules
7 - Template-ISSUE
## Description
A brief description of the issue.
## Steps to Reproduce
1. Describe the first step
2. Describe the second step
3. Describe the third step
4. ...
## Expected Result
Describe what you expected to happen.
## Actual Result
Describe what actually happened.
## Screenshots
If applicable, add screenshots to help explain your problem.
## Environment
- OS: [e.g. Windows 10, macOS 10.15, Ubuntu 20.04]
- Browser: [e.g. Chrome 86, Firefox 82, Safari 14]
- Version: [e.g. 1.0.0]
## Additional Information
Add any other context about the problem here.
The rendered effect is as follows:
Description
A brief description of the issue.
Steps to Reproduce
Describe the first step
Describe the second step
Describe the third step
…
Expected Result
Describe what you expected to happen.
Actual Result
Describe what actually happened.
Screenshots
If applicable, add screenshots to help explain your problem.
Environment
OS: [e.g. Windows 10, macOS 10.15, Ubuntu 20.04]
Browser: [e.g. Chrome 86, Firefox 82, Safari 14]
Version: [e.g. 1.0.0]
Additional Information
Add any other context about the problem here.
Checklist
I have searched the existing issues
I have added the appropriate labels
I have reproduced the issue with the latest version
I have provided a detailed description of the bug
I have provided steps to reproduce the issue
I have included screenshots (if applicable)
I have provided the environment details (OS, version, etc.)
8 - Template-UT-README
# Module Name
## Test Objectives
<Description of test objectives and methods>

## Test Environment

<Description of test environment and dependencies>

## Function Check

<Describe the target functions to be tested and the corresponding checking methods>

| No. | Module | Function Description | Checkpoint Description | Check Identifier | Check Item |
|-|-|-|-|-|-|
|-|-|-|-|-|-|

## Verification Interface

<Description of the interface>

## Test Case Description

#### Test Case 1

| Step | Operation | Expected Result | Covered Function Point |
|-|-|-|-|
|-|-|-|-|

#### Test Case 2

| Step | Operation | Expected Result | Covered Function Point |
|-|-|-|-|
|-|-|-|-|

## Directory Structure

<Description of the directory structure for this module>

## Checklist
- [ ] This document meets the specified [template]() requirements
- [ ] The API provided by Env does not contain any DUT pins or timing information
- [ ] The API of Env remains stable (total [ X ])
- [ ] Supported RTL versions in Env have been checked (supported versions [ X ])
- [ ] Function points (total [ X ]) are consistent with the [design document]()
- [ ] Checkpoints (total [ X ]) cover all function points
- [ ] The input of checkpoints does not depend on any DUT pins, only on the standard API of Env
- [ ] All test cases (total [ X ]) are mapped to function checkpoints
- [ ] All test cases use assert for result checking
- [ ] All DUTs or corresponding wrappers are created via fixture
- [ ] RTL version is checked in the above fixtures
- [ ] The fixture for creating DUT or corresponding wrapper performs function and code line coverage statistics
- [ ] Filtering requirements are checked when setting code line coverage
The rendered effect is as follows:
Module Name

Test Objectives

<Description of test objectives and methods>

Test Environment

<Description of test environment and dependencies>

Function Check

<Describe the target functions to be tested and the corresponding checking methods>

| No. | Module | Function Description | Checkpoint Description | Check Identifier | Check Item |
|-|-|-|-|-|-|
|-|-|-|-|-|-|

Verification Interface

<Description of the interface>

Test Case Description

Test Case 1

| Step | Operation | Expected Result | Covered Function Point |
|-|-|-|-|
|-|-|-|-|

Test Case 2

| Step | Operation | Expected Result | Covered Function Point |
|-|-|-|-|
|-|-|-|-|

Directory Structure

<Description of the directory structure for this module>

Checklist

This document meets the specified template requirements
The API provided by Env does not contain any DUT pins or timing information
The API of Env remains stable (total [ X ])
Supported RTL versions in Env have been checked (supported versions [ X ])
Function points (total [ X ]) are consistent with the design document
Checkpoints (total [ X ]) cover all function points
The input of checkpoints does not depend on any DUT pins, only on the standard API of Env
All test cases (total [ X ]) are mapped to function checkpoints
All test cases use assert for result checking
All DUTs or corresponding wrappers are created via fixture
RTL version is checked in the above fixtures
The fixture for creating DUT or corresponding wrapper performs function and code line coverage statistics
Filtering requirements are checked when setting code line coverage
9 - Common APIs
comm Module
The comm module provides some commonly used APIs, which can be called in the following ways:
```python
# import all
from comm import *
# or directly import the functions you need
from comm import function_you_need
# or access from the module
import comm

comm.function_you_need()
```
cfg Submodule
get_config(cfg=None)
Get the current Config configuration
Input: If cfg is not empty, return cfg. Otherwise, automatically get the global Config via toffee.
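A short usage sketch (the printed version string is illustrative; see the configuration file description later in this document for the available keys):

```python
import comm

cfg = comm.get_config()  # Get the global configuration
print(cfg.rtl.version)   # e.g., "openxiangshan-kmh-24092701"
```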
get_all_rtl_files(top_module, cfg)
Get a list of all RTL files (.v or .sv) that the module named top_module depends on, ensuring that the first element of the list is the absolute path of the file where top_module is located. All RTL files are located in the UnityChipForXiangShan/rtl/rtl directory.
Input:
top_module: module name, type str
cfg: config info, type CfgObject
Output:
Returns a list of strings, each string is the absolute path of an RTL file that the module depends on. The first element of the list is the path of the file where top_module is located.
Suppose top_module is "ALU", and its dependent RTL files include ALU.sv, adder.v, and multiplier.v:
```python
paths = get_all_rtl_files("ALU", cfg)
"""
Possible contents of paths:
[
    "/path/to/UnityChipForXiangShan/rtl/rtl/ALU.sv",
    "/path/to/UnityChipForXiangShan/rtl/rtl/adder.v",
    "/path/to/UnityChipForXiangShan/rtl/rtl/multiplier.v"
]
"""
```
10 - Others
Test Case Management
If test cases are closely related to the target RTL version, changes in RTL may render previous test cases unsuitable. In addition, different scenarios have different requirements, such as not running time-consuming cases when verifying the test environment. Therefore, test cases need to be managed so that users can skip certain cases in specific scenarios. To achieve this, we use pytest.mark.toffee_tags to tag and version each test case. Then, in the configuration file, you can set which tags to skip or which tags to run.
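For instance, a case might be tagged as follows (the tag and version strings are illustrative):

```python
import pytest

@pytest.mark.toffee_tags(tag=["my_tag"], version="version-1 < version-13")
def test_case_1():
    assert True
```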
For example, the above test_case_1 is tagged with my_tag and supports versions from version1 to version13. Therefore, you can specify test.skip-tags=["my_tag"] in the configuration file to skip this case during execution.
The parameters for pytest.mark.toffee_tags are as follows:
```python
@pytest.mark.toffee_tags(
    tag: Optional[list, str] = [],      # Case tag
    version: Optional[list, str] = [],  # RTL version requirement for the case
    skip: callable = None,              # Custom skip logic, skip(tag, version, item): (skip, reason)
)
```
The tag parameter of toffee_tags supports both str and list[str] types. The version parameter can also be str or list[str]. If it is a list, it matches exactly; if it is a string, the matching rules are as follows:
name-number1 < name-number2: means the version must be between number1 and number2 (inclusive, number can be a decimal, e.g., 1.11)
name-number1+: means version number1 and later
name-number1-: means version number1 and earlier
If none of the above, and there is a * or ?, it is treated as a wildcard. Other cases are exact matches.
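Illustrative version requirements under these rules (the version names are hypothetical):

```python
import pytest

@pytest.mark.toffee_tags(version="kmh-1 < kmh-13")       # between versions 1 and 13, inclusive
def test_version_range(): ...

@pytest.mark.toffee_tags(version="kmh-1+")               # version 1 and later
def test_version_after(): ...

@pytest.mark.toffee_tags(version="openxiangshan-kmh-*")  # wildcard match
def test_version_wildcard(): ...
```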
Predefined tags can be found in comm/constants.py, for example:
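The values below are assumptions based on how the tags are used elsewhere in this document; consult comm/constants.py for the authoritative definitions:

```python
TAG_SMOKE = "SMOKE"                  # Quick smoke tests for environment checks
TAG_LONG_TIME_RUN = "LONG_TIME_RUN"  # Time-consuming cases
TAG_RARELY_USED = "RARELY_USED"      # Cases that rarely need to run
```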
In the default configuration (config/_default.yaml), tests marked with LONG_TIME_RUN, REGRESSION, RARELY_USED, and CI are filtered out.
You can use @pytest.mark.toffee_tags to add tags to each case, or define the following variables in a module to add tags to all test cases in the module:
```python
toffee_tags_default_tag = []      # Corresponds to the tag parameter
toffee_tags_default_version = []  # Corresponds to the version parameter
toffee_tags_default_skip = None   # Corresponds to the skip parameter
```
*Note: The version number in this environment will automatically filter out git tags. For example, if the downloaded RTL is named openxiangshan-kmh-97e37a2237-24092701.tar.gz, its version number in this project is openxiangshan-kmh-24092701, which can be obtained via cfg.rtl.version or comm.get_config().rtl.version.
Version Checking
In addition to using the toffee_tags tag for automatic version checking, you can also actively check versions via get_version_checker. A unit test usually consists of a test environment (Test Env) and test cases (Test Case). The Env encapsulates RTL pins and functions, then provides a stable API to the Case, so version checking is needed in the Env to determine whether to skip all test cases using this environment. For example, in Env:
```python
...
from comm import get_version_checker

version_check = get_version_checker("openxiangshan-kmh-*")  # Get RTL version checker, same as the version parameter in toffee_tags

@pytest.fixture()
def my_fixture(request):
    version_check()  # Actively check in the fixture
    ...
    yield dut
    ...
```
In the above example, the Env actively performs version checking in the fixture named my_fixture. Therefore, every time the test case calls it, version checking is performed, and if the check fails, the case will be skipped.
Repository Directory Structure
UnityChipForXiangShan
```
├── LICENSE          # Open source license
├── Makefile         # Main Makefile
├── README.en.md     # English readme
├── README.zh.md     # Chinese readme
├── __init__.py      # Python module file, allows importing UnityChipForXiangShan as a module
├── pytest.ini       # PyTest configuration file
├── comm             # Common components: logs, functions, configs, etc.
├── configs          # Configuration files directory
├── documents        # Documentation
├── dut              # DUT generation directory
├── out              # Output directory for logs, reports, etc.
├── requirements.txt # Python dependencies
├── rtl              # RTL cache
├── run.py           # Main Python entry file
├── scripts          # DUT compilation scripts
├── tools            # Common tool modules
├── ut_backend       # Backend test cases
├── ut_frontend      # Frontend test cases
├── ut_mem_block     # Memory access test cases
└── ut_misc          # Other test cases
```
Configuration File Description
Default configuration and explanation:
```yaml
# Default configuration file
# Configuration loading order: _default.yaml -> user-specified *.yaml -> command line parameters, e.g., log.term-level='debug'

# RTL configuration
rtl:
  # RTL download address, all *.gz.tar files from this address are treated as target RTL
  base-url: https://<your_rtl_download_address>
  # RTL version to download, e.g., openxiangshan-kmh-97e37a2237-24092701
  version: latest
  # Directory to store RTL, relative to the current config file path
  cache-dir: "../rtl"

# Test case configuration (tag and case support wildcards)
test:
  # Skip tags, all test cases with these tags will be skipped
  skip-tags: ["LONG_TIME_RUN", "RARELY_USED", "REGRESSION", "CI"]
  # Target tags, only test cases with these tags will be executed (skip-tags overrides run-tags)
  run-tags: []
  # Skipped test cases, all test cases (or module names) with these names will be skipped
  skip-cases: []
  # Target test cases, only test cases (or module names) with these names will be executed (skip-cases overrides run-cases)
  run-cases: []
  # Skip exceptions, all test cases that throw these exceptions will be skipped
  skip-exceptions: []

# Output configuration
output:
  # Output directory, relative to the current config file path
  out-dir: "../out"

# Test report configuration
report:
  # Report generation directory, relative to output.out-dir
  report-dir: "report"
  # Report name, supports variable substitution: %{host} hostname, %{pid} process ID, %{time} current time
  report-name: "%{host}-%{pid}-%{time}/index.html"
  # Report content
  information:
    # Report title
    title: "XiangShan KMH Test Report"
    # Report user information
    user:
      name: "User"
      email: "User@example.email.com"
    # Target line coverage, e.g., 90 means 90%
    line_grate: 99
    # Other information to display, key is the title, value is the content
    meta:
      Version: "1.0"

# Log configuration
log:
  # Root output level
  root-level: "debug"
  # Terminal output level
  term-level: "info"
  # File log output directory
  file-dir: "log"
  # File log name, supports variable substitution: %{host} hostname, %{pid} process ID, %{time} current time
  file-name: "%{host}-%{pid}-%{time}.log"
  # File log output level
  file-level: "info"

# Test result configuration (this data is used to populate statistics charts in documents; original data comes from toffee-test generated reports)
# After running the tests, you can view the results via `make doc`
doc-result:
  # Whether to enable test result post-processing
  disable: False
  # Organizational structure configuration of the target DUT
  dutree: "%{root}/configs/dutree/xiangshan-kmh.yaml"
  # Result name, will be saved to the output report directory
  result-name: "ut_data_progress.json"
  # Symlink to the created test report for hugo
  report-link: "%{root}/documents/static/data/reports"
```
You can add custom parameters in the above configuration file, get global config info via cfg = comm.get_config(), and then access via cfg.your_key. The cfg info is read-only and cannot be modified by default.
11 - Required Specifications
In order to facilitate the integration of everyone’s contributions, it is necessary to adopt the same “specifications” in coding, environment, and documentation.
Environment Requirements
python: When coding in Python, use the standard library as much as possible, and use general syntax compatible with most Python 3 versions (try to be compatible with Python 3.6 - Python 3.12). Do not use syntax that is too old or too new.
Operating System: Ubuntu 22.04 is recommended. On Windows, it is recommended to use the WSL2 environment.
hugo: Recommended version is 0.124.1 (older versions do not support symlinks)
Minimal dependencies: Try to minimize the use of third-party C++/C libraries.
picker: It is recommended to install the picker tool and xspcomm library via wheel.
Test Cases
Code Style: It is recommended to follow the PEP 8 standard
Build Scripts: The naming of build scripts must follow the DUT naming structure, otherwise verification results cannot be collected correctly. For example, the build file for the backend.ctrl_block.decode UT in the scripts directory should be named build_ut_backend_ctrl_block_decode.py (with the fixed prefix build_ut_, and dots . replaced by underscores _). The script should implement the build(cfg) -> bool and line_coverage_files(cfg) -> list[str] methods. build is used to compile the DUT into a Python module, and line_coverage_files is used to return the files for code line coverage statistics.
Test Case Tags: If a test case cannot be version-agnostic, it needs to be marked with pytest.mark.toffee_tags to indicate the supported versions.
Test Case Abstraction: The input of the test case should not contain specific DUT pins or other strongly coupled content. Only functions encapsulated on top of the DUT can be called. For example, for an adder, the DUT’s target function should be encapsulated as dut_wrapper.add(a: int, b: int) -> int, bool, and in the test_case, only sum, c = add(a, b) should be called for testing.
Coverage Abstraction: When writing functional coverage, the input of the checkpoint function should also not include DUT pins.
Environment Abstraction: For a verification, it is usually divided into two parts: Test Case and Env (everything except the test case is called Env, which includes DUT, drivers, monitors, etc.). The Env should provide abstract functional interfaces to the outside and should not expose too many details.
Test Documentation: In the verification environment of each DUT, a README.md should be provided to explain the environment, such as the interfaces provided by Env to Case, directory structure, etc.
PR Writing
Title: Concise and clear, able to summarize the main content of the PR.
Detailed Description: Clearly explain the purpose of the PR, the changes made, and relevant background information. If solving an existing issue, provide a link (e.g., Issue).
Related Issues: Link related issues in the description, such as Fixes #123, so that the related issue is closed when the PR is merged.
Testing: Testing is required, and the test results should be described.
Documentation: Any documentation involved in the PR should be updated accordingly.
Decomposition: If the PR involves many changes, consider splitting it into multiple PRs.
Checklist: Check whether compilation passes, code style is reasonable, tests pass, necessary comments are present, etc.
Template: Please refer to the provided PR template reference link.
ISSUE Writing
Same requirements as above.
12 - Maintainers
When submitting an issue, pull request, or discussion, specifying the maintainer of the corresponding module can help you get a quicker response. The current maintainers are listed below (in alphabetical order):