Introduction to chip verification using the Guoke Cache as an example, covering the basic verification process and report writing.
Verification Basics
1 - Chip Verification
This page provides a brief introduction to chip verification, including concepts used in examples such as DUT (Design Under Test) and RM (Reference Model).
The chip verification process needs to be adapted to the actual situation of each company or team; there is no absolute standard that fits every case, and the process described here should be treated only as a reference.
What is Chip Verification?
The chip design-to-production process involves three main stages: chip design, chip manufacturing, and chip packaging/testing. Chip design is further divided into front-end and back-end design. Front-end design, also known as logic design, aims to achieve the desired circuit logic functionality. Back-end design, or physical design, focuses on optimizing layout and routing to reduce chip area, lower power consumption, and increase frequency. Chip verification is a critical step in the chip design process. Its goal is to ensure that the designed chip meets the specified requirements in terms of functionality, performance, and power consumption. The verification process typically includes functional verification, timing verification, and power verification, using methods and tools such as simulation, formal verification, hardware acceleration, and prototyping. For this tutorial, chip verification refers only to the verification of the front-end design to ensure that the circuit logic meets the specified requirements (“Does this proposed design do what is intended?”), commonly known as functional verification. This does not include back-end design aspects like power and frequency.
For chip products, design errors that make it to production can be extremely costly to fix, as it might require recalling products and remanufacturing chips, incurring significant financial and time costs. Here are some classic examples of failures due to inadequate chip verification:
Intel Pentium FDIV Bug: In 1994, Intel's Pentium processor was found to have a severe division error known as the FDIV bug. This error was due to incorrect entries in a lookup table within the chip's floating-point unit. Although it rarely affected most applications, it caused incorrect results in specific calculations. Intel had to recall a large number of processors, leading to significant financial losses.
Ariane 5 Rocket Failure: Though not a chip example, this highlights the importance of hardware verification. In 1996, the European Space Agency's Ariane 5 rocket exploded shortly after launch due to an overflow when converting a 64-bit floating-point number to a 16-bit integer in the navigation system, causing the system to crash. This error went undetected during design and led to the rocket's failure.
AMD Barcelona Bug: In 2007, AMD's Barcelona processor had a severe Translation Lookaside Buffer (TLB) error that could cause system crashes or reboots. AMD had to mitigate this by lowering the processor's frequency and releasing BIOS updates, which negatively impacted their reputation and financial status.
These cases emphasize the importance of chip verification. Errors detected and fixed during the design phase can prevent these costly failures. Insufficient verification continues to cause issues today, such as a new entrant in the ASIC chip market rushing a 55nm chip without proper verification, leading to three failed tape-outs and approximately $500,000 in losses per failure.
Chip Verification Process
The coupling relationship between chip design and verification is shown in the diagram above. Both design and verification have the same input: the specification document. Based on this document, the design and verification teams code independently according to their own understanding of the requirements. The design team must ensure that the RTL code is "synthesizable," taking circuit characteristics into account, while the verification team mainly focuses on whether the functionality meets the requirements, with fewer coding constraints. After both teams complete module development, a sanity test is conducted to check whether the functionality matches. If there are discrepancies, the two teams debug together to identify and fix issues before retesting. Because of the high coupling between chip design and verification, some companies couple their design and verification teams directly, assigning a verification team to each design submodule. The coupling process in the diagram is coarse-grained; specific chips (e.g., SoC, DDR) and companies have their own cooperation models.
In the above comparison test, the module produced by the design team is usually called DUT (Design Under Test), while the model developed by the verification team is called RM (Reference Model). The verification process includes: writing a verification plan, creating a verification platform, organizing functional points, constructing test cases, running and debugging, collecting bugs/coverage, regression testing, and writing test reports.
Verification Plan: The verification plan describes how verification will be carried out and how verification quality will be ensured to meet functional verification requirements. It typically includes verification goals, strategies, environment, items, process, risk mitigation, resources, schedule, results, and reports. Verification goals specify the functions or performance metrics to be verified, directly extracted from the chip specification. Verification strategy outlines the methods to be used, such as simulation, formal verification, FPGA acceleration, etc., and how to organize the verification tasks. The verification environment details the specific testing environment, including verification tools and versions. The verification item library lists specific items to be verified and expected results. Verification plans can be general or specific to sub-tasks.
Platform Setup: The verification platform is the execution environment for specific verification tasks. Similar verification tasks can use the same platform. Setting up the platform is a key step, including choosing verification tools (e.g., software simulation, formal verification, hardware acceleration), configuring the environment (e.g., server, FPGA), creating the test environment, and basic test cases. Initial basic test cases are often called “smoke tests.” Subsequent test codes are based on this platform, so it must be reusable. The platform includes the test framework, the code being tested, and basic signal stimuli.
Organizing Functional Points: This involves listing the DUT’s basic functions based on the specification manual and detailing how to test each function. Functional points are prioritized based on importance, risk, and complexity. They also need to be tracked for status, with updates synchronized to the plan if changes occur.
Test Cases: These are conditions or variables used to determine whether the DUT meets specific requirements and operates correctly. Each case includes test conditions, input data, expected results, actual results, and test outcomes. Running test cases and comparing expected versus actual results helps verify that the system or application implements its functions or requirements correctly. Test cases are a crucial tool for verifying the chip design against its specification (a minimal sketch follows this list).
Coding Implementation: This is the execution of test cases, including generating test data, selecting the test framework, programming language, and writing the reference model. This phase requires a deep understanding of functional points and test cases. Misunderstandings can lead to the DUT being undrivable or undetected bugs.
Collecting Bugs/Coverage: The goal of verification is to find design bugs early, so collected bugs need unique identifiers, severity ratings, and status tracking with design engineers. Discovering bugs is ideal, but since not every test finds bugs, coverage is another metric to evaluate verification thoroughness. Sufficient verification is indicated when coverage (e.g., code coverage >90%) exceeds a threshold.
Regression Testing: As verification and design are iterative, regression tests ensure the modified DUT still functions correctly after bug fixes. This catches new errors or reactivates old ones due to changes. Regression tests can be comprehensive or selective, covering all functions or specific parts.
Test Report: This summarizes the entire verification process, providing a comprehensive view of the testing activities, including objectives, executed test cases, discovered issues, coverage, and efficiency.
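To make the structure of a test case concrete (as referenced under "Test Cases" above), here is a minimal, hypothetical Python sketch: the adder, its bit width, and the function names are illustrative assumptions, not part of any real DUT. It fixes the test conditions, feeds input data, computes the expected result from a reference model, reads the actual result, and records the outcome with an assertion.
# A toy "DUT" standing in for a design under test (illustrative only).
def dut_add(a: int, b: int) -> int:
    return (a + b) & 0xFF            # assume an 8-bit adder that wraps around

# Reference model: the expected behaviour derived from the specification.
def ref_add(a: int, b: int) -> int:
    return (a + b) % 256

def test_case_adder():
    for a, b in [(1, 2), (255, 1), (128, 128)]:   # test conditions / input data
        expected = ref_add(a, b)                  # expected result
        actual = dut_add(a, b)                    # actual result from the DUT
        assert actual == expected                 # test outcome: pass or fail

if __name__ == "__main__":
    test_case_adder()
    print("all test cases passed")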
Levels of Chip Verification
Chip verification typically includes four levels, determined by the size of the object under test: UT, BT, IT, and ST.
Unit Testing (UT): The lowest verification level, focusing on a single module or component to ensure that its functionality is correct.
Block Testing (BT): Modules often have tight coupling, which makes isolated UT testing complex. BT merges several coupled modules into one DUT block for testing.
Integration Testing (IT): Builds on UT by combining multiple modules or components to verify their collaborative functionality, usually testing subsystem functionality.
System Testing (ST): Also called Top verification, ST combines all modules or components into a complete system to verify that overall functionality and performance requirements are met.
In theory, these levels follow a bottom-up order, each building on the previous level. However, practical verification activities depend on the scale, expertise, and functional needs of the enterprise, so not all levels are always involved. At each level, relevant test cases are written, tests run, and results analyzed to ensure the chip design’s correctness and quality.
Chip Verification Metrics
Verification metrics typically include functional correctness, test coverage, defect density, verification efficiency, and verification cost. Functional correctness is the fundamental metric, ensuring the chip executes its designed functions correctly. This is validated through functional test cases, including normal and robustness tests. Test coverage indicates the extent to which test cases cover design functionality, with higher coverage implying higher verification quality. Coverage can be further divided into code coverage, functional coverage, condition coverage, etc. Defect density measures the number of defects found in a given design scale or code volume, with lower density indicating higher design quality. Verification efficiency measures the amount of verification work completed within a given time and resource frame, with higher efficiency indicating higher productivity. Verification cost encompasses all resources required for verification, including manpower, equipment, and time, with lower costs indicating higher cost-effectiveness.
Functional correctness is the absolute benchmark for verification. However, in practice, it is often impossible to determine if the test plan is comprehensive and if all test spaces have been adequately covered. Therefore, a quantifiable metric is needed to guide whether verification is sufficient and when it can be concluded. This metric is commonly referred to as “test coverage.” Test coverage typically includes code coverage (lines, functions, branches) and functional coverage.
Code Line Coverage: This indicates how many lines of the DUT design code were executed during testing.
Function Coverage: This indicates how many functions of the DUT design code were executed during testing.
Branch Coverage: This indicates how many branches (if-else) of the DUT design code were executed during testing.
Functional Coverage: This indicates how many predefined functions were triggered during testing.
High code coverage can improve the quality and reliability of verification but does not guarantee complete correctness since it cannot cover all input and state combinations. Therefore, in addition to pursuing high code coverage, other testing methods and metrics, such as functional testing, performance testing, and defect density, should be combined.
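As a rough illustration of the difference between these coverage types, the following Python sketch (purely illustrative; it is not the coverage mechanism of any particular tool) tracks functional coverage as a set of predefined cover points that a monitor marks only when the corresponding behaviour is actually observed:
# Predefined functional cover points for a hypothetical cache DUT.
cover_points = {
    "read_hit":  False,
    "read_miss": False,
    "mmio_req":  False,
}

def observe(event: str) -> None:
    # Called by a monitor whenever the DUT exhibits a behaviour of interest.
    if event in cover_points:
        cover_points[event] = True

# Suppose a test run only exercised a read hit and an MMIO request:
observe("read_hit")
observe("mmio_req")

covered = sum(cover_points.values())
print(f"functional coverage: {covered}/{len(cover_points)} points "
      f"({100.0 * covered / len(cover_points):.0f}%)")   # prints 2/3 points (67%)
Line, function, and branch coverage, by contrast, are usually collected automatically by the simulation tool rather than defined by hand.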
Chip Verification Management
Chip verification management is a comprehensive process that encompasses all activities in the chip verification process, including the development of verification strategies, the setup of the verification environment, the writing and execution of test cases, the collection and analysis of results, and the tracking and resolution of issues and defects. The goal of chip verification management is to ensure that the chip design meets all functional and performance requirements, as well as specifications and standards.
In chip verification management, the first step is to formulate a detailed verification strategy, including objectives, scope, methods, and schedules. Then, a suitable verification environment must be set up, including hardware, software tools, and test data. Next, a series of test cases covering all functional and performance points must be written and executed, with results collected and analyzed to identify problems and defects. Finally, these issues and defects need to be tracked and fixed until all test cases pass.
Chip verification management is a complex process requiring a variety of skills and knowledge, including chip design, testing methods, and project management. It requires close collaboration with other activities, such as chip design, production, and sales, to ensure the quality and performance of the chip. The effectiveness of chip verification management directly impacts the success of the chip and the company’s competitiveness. Therefore, chip verification management is a crucial part of the chip development process.
The chip verification management process can be based on a “project management platform” and a “bug management platform,” with platform-based management typically being significantly more efficient than manual management.
Current State of Chip Verification
Currently, chip verification is typically completed within chip design companies. The process is not only technically complex but also very costly. Because verification is so closely tied to design, it inevitably involves the source code of the chip design. However, chip design companies usually treat this source code as a trade secret, so verification must be performed by internal personnel, which makes outsourcing difficult.

The importance of chip verification lies in ensuring that the designed chip operates reliably under various conditions. Verification is not only about meeting technical specifications but also about coping with growing complexity and emerging technology demands. As the semiconductor industry evolves, the workload of chip verification has kept increasing; for complex chips, verification work now exceeds design work, accounting for more than 70% of the overall effort. In terms of staffing, this means verification engineers usually outnumber design engineers two to one (e.g., in a team of three thousand at Zeku, there were about one thousand design engineers and two thousand verification engineers; similar or higher ratios apply at other large chip design companies).
Due to the specificity of verification work, which requires access to the chip design source code, it significantly limits the possibility of outsourcing chip verification. The source code is considered the company’s core trade secret, involving technical details and innovations, thus making it legally and securely unfeasible to share with external parties. Consequently, internal personnel must shoulder the verification work, increasing the internal workload and costs.
Given the current situation, the demand for chip verification engineers continues to grow. They need a solid technical background, familiarity with various verification tools and methods, and keen insight into emerging technologies. Due to the complexity of verification work, verification teams typically need a large scale, contrasting sharply with the design team size.
To meet this challenge, the industry may need to continuously explore innovative verification methods and tools to improve efficiency and reduce costs.
Summary: Complex Chip Verification Costs
High Verification Workload: For complex chips, verification work accounts for over 70% of the entire chip design work.
High Labor Costs: The number of verification engineers is twice that of design engineers, with complex tasks requiring thousands of engineers.
Internal Verification: To ensure trade secrets (chip design code) are not leaked, chip design companies can only hire a large number of verification engineers to perform verification work internally.
Crowdsourcing Chip Verification
In contrast to hardware, the software field has long made testing outsourcing (subcontracting) the norm as a way to reduce testing costs. This business is highly mature, with a market size in the billions of yuan and advancing towards the trillion-yuan scale. In terms of content, software testing and hardware verification are highly similar (the objects differ, but the goal of checking that a system behaves as intended is the same). Is it feasible to subcontract hardware verification in the same way as software testing?
Crowdsourcing chip verification faces many challenges, such as:
Small Number of Practitioners: Compared to the software field, the number of hardware developers is several orders of magnitude smaller. For instance, according to GitHub statistics (https://madnight.github.io/githut/#/pull_requests/2023/2), traditional software programming languages (Python, Java, C++, Go) account for nearly 50%, whereas hardware description languages like Verilog account for only 0.076%, reflecting the disparity in developer numbers.
Commercial Verification Tools: The verification tools used in industry (simulators, formal verification, data analysis) are almost all commercial tools that are rarely accessible to individuals and are therefore difficult to learn on one's own.
Lack of Open Learning Materials: Chip verification involves accessing the chip design source code, which is typically regarded as the company’s trade secrets and proprietary technology. Chip design companies may be unwilling to disclose detailed verification processes and techniques, limiting the availability of learning materials.
Feasibility Analysis
Although the chip verification field has been relatively closed, from a technical perspective, adopting a subcontracting approach for verification is a feasible option due to several factors:
Firstly, with the gradual increase of open-source chip projects, the source code involved in verification has become more open and transparent. These open-source projects do not have concerns about trade secrets in their design and verification process, providing more possibilities for learning and research. Even if some projects involve trade secrets, encryption and other methods can be used to hide design codes, addressing trade secret issues to a certain extent and making verification easier to achieve.
Secondly, many fundamental verification tools have emerged in the chip verification field, such as Verilator and SystemC. These tools provide robust support for verification engineers, helping them perform verification work more efficiently. These tools alleviate some of the complexity and difficulty of the verification process, providing a more feasible technical foundation for adopting subcontracted verification methods.
In the open-source software field, some successful cases can be referenced. For example, the Linux kernel verification process adopts a subcontracting approach, with different developers and teams responsible for verifying different modules, ultimately forming a complete system. Similarly, in the machine learning field, the ImageNet project adopted a crowdsourced annotation strategy, completing large-scale image annotation tasks through crowdsourcing. These cases provide successful experiences for the chip verification field, proving the potential of subcontracted verification to improve efficiency and reduce costs.
Therefore, despite the chip verification field being relatively closed compared to other technical fields, technological advances and the increase of open-source projects offer new possibilities for adopting subcontracted verification. By drawing on successful experiences from other fields and utilizing existing verification tools, we can promote the application of more open and efficient verification methods in chip verification, further advancing the industry. This openness and flexibility in technology will provide more choices for verification engineers, promoting innovative and diverse development in the chip verification field.
Technical Route
To overcome challenges and engage more people in chip verification, this project continuously attempts the following technical directions:
Provide Multi-language Verification Tools: Traditional chip verification is based on the SystemVerilog language, which has a small user base. To allow other software development/testing professionals to participate in chip verification, this project provides the multi-language verification conversion tool Picker, enabling verifiers to use familiar programming languages (e.g., C++, Python, Java, Go) with open-source verification tools.
Provide Verification Learning Materials: The scarcity of chip verification learning materials is mainly due to the improbability of commercial companies disclosing internal data. Therefore, this project will continuously update learning materials, allowing verifiers to learn the necessary skills online for free.
Provide Real Chip Verification Cases: To make the learning materials more practical, this project uses the “Xiangshan Kunming Lake (an industrial-grade high-performance RISC-V processor) IP core” as a basis, continuously updating verification cases by extracting modules from it.
Organize Chip Design Subcontracted Verification: Applying what is learned is the goal of every learner. Therefore, this project periodically organizes subcontracted chip design verification, allowing everyone (whether you are a university student, verification expert, software developer, tester, or high school student) to participate in real chip design work.
The goal of this project is to achieve the following vision: “Open the black box of traditional verification modes, allowing anyone interested to participate in chip verification anytime, anywhere, using their preferred programming language.”

2 - Digital Circuits
This page introduces the basics of digital circuits. Digital circuits use digital signals and are the foundation of most modern computers.
What Are Digital Circuits
Digital circuits are electronic circuits that use two discrete voltage levels to represent information. Typically, digital circuits use two power supply voltages to indicate high (H) and low (L) levels, representing the digits 1 and 0 respectively. This representation uses binary signals to transmit and process information.
Most digital circuits are built using field-effect transistors, with MOSFETs (Metal-Oxide-Semiconductor Field-Effect Transistors) being the most common. MOSFETs are semiconductor devices that control current flow using an electric field, enabling digital signal processing.
In digital circuits, MOSFETs are combined to form various logic gates like AND, OR, and NOT gates. These logic gates are combined in different ways to create the various functions and operations in digital circuits. Here are some key features of digital circuits:
(1) Voltage Representation: Digital circuits use two voltage levels, high and low, to represent digital information. Typically, a high level represents the digit 1, and a low level represents the digit 0.
(2) MOSFET Implementation: MOSFETs are one of the most commonly used components in digital circuits. By controlling the on and off states of MOSFETs, digital signal processing and logic operations can be achieved.
(3) Logic Gate Combinations: Logic gates, composed of MOSFETs, are the basic building blocks of digital circuits. By combining different logic gates, complex digital circuits can be built to perform various logical functions (a small sketch follows this list).
(4) Binary Representation: Information in digital circuits is typically represented using the binary system. Each digit can be made up of a series of binary bits, which can be processed and operated on within digital circuits.
(5) Signal Processing: Digital circuits convert and process signals through changes in voltage and logic operations. This discrete processing method makes digital circuits well-suited for computing and information processing tasks.
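To make the gate-combination idea in point (3) concrete, here is a minimal Python sketch that models gates as functions on the values 0 and 1 and builds an XOR gate and a half adder out of them. This is a software analogy only, not a circuit description.
# Basic gates modelled as functions on the binary values 0 and 1.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def NOT(a):    return a ^ 1

# XOR built purely from AND, OR and NOT, i.e. a combination of simpler gates.
def XOR(a, b):
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# A half adder: 'sum' is the XOR of the inputs, 'carry' is the AND of the inputs.
for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b} sum={XOR(a, b)} carry={AND(a, b)}")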
Why Learn Digital Circuits
Learning digital circuits is fundamental and necessary for the chip verification process, primarily for the following reasons:
(1) Understanding Design Principles: Digital circuits are the foundation of chip design. Knowing the basic principles and design methods of digital circuits is crucial for understanding the structure and function of chips. The goal of chip verification is to ensure that the designed digital circuits work according to specifications in actual hardware, and understanding digital circuits is key to comprehending the design.
(2) Design Standards: Chip verification typically involves checking whether the design meets specific standards and functional requirements. Learning digital circuits helps in understanding these standards, thus building better test cases and verification processes to ensure thorough and accurate verification.
(3) Timing and Clocks: Timing issues are common challenges in digital circuit design and verification. Learning digital circuits helps in understanding concepts of timing and clocks, ensuring that timing issues are correctly handled during verification, avoiding timing delays and conflicts in the circuit.
(4) Logical Analysis: Chip verification often involves logical analysis to ensure circuit correctness. Learning digital circuits fosters a deep understanding of logic, aiding in logical analysis and troubleshooting.
(5) Writing Test Cases: In chip verification, various test cases need to be written to ensure design correctness. Understanding digital circuits helps in designing comprehensive and targeted test cases, covering all aspects of the circuit.
(6) Signal Integrity: Learning digital circuits helps in understanding signal propagation and integrity issues within circuits. Ensuring proper signal transmission under different conditions is crucial, especially in high-speed designs.
Overall, learning digital circuits provides foundational knowledge and tools for chip verification, enabling verification engineers to better understand designs, write effective test cases, analyze verification results, and troubleshoot issues. Theoretical and practical experience with digital circuits is indispensable for chip verification engineers.
Digital Circuits Basics
You can learn digital circuits through the following online resources:
- Tsinghua University’s Digital Circuits Basics
- USTC Digital Circuit Lab
- Digital Design and Computer Architecture
- MIT Analysis and Design of Digital Integrated Circuits
Hardware Description Language Chisel
Traditional Description Languages
Hardware Description Languages (HDL) are languages used to describe digital circuits, systems, and hardware. They allow engineers to describe hardware structure, function, and behavior through text files, enabling abstraction and modeling of hardware designs.
HDL is commonly used for designing and simulating digital circuits such as processors, memory, controllers, etc. It provides a formal method to describe the behavior and structure of hardware circuits, making it easier for design engineers to perform hardware design, verification, and simulation.
Common hardware description languages include:
- Verilog: One of the most widely used HDLs, Verilog is an event-driven language widely used for digital circuit design, verification, and simulation.
- VHDL: Another common HDL, VHDL is a strongly typed language offering rich abstraction and modular design methods.
- SystemVerilog: An extension of Verilog, SystemVerilog introduces advanced features like object-oriented programming and randomized testing, making it more suitable for complex system design and verification.
Chisel
Chisel is a modern, advanced hardware description language that differs from traditional Verilog and VHDL. It’s a hardware construction language based on Scala. Chisel offers a more modern and flexible way to describe hardware, leveraging Scala’s features to easily implement parameterization, abstraction, and reuse while maintaining hardware-level efficiency and performance.
Chisel’s features include:
- Modern Syntax: Chisel’s syntax is more similar to software programming languages like Scala, making hardware description more intuitive and concise.
- Parameterization and Abstraction: Chisel supports parameterization and abstraction, allowing for the creation of configurable and reusable hardware modules.
- Type Safety: Based on Scala, Chisel has type safety features, enabling many errors to be detected at compile-time.
- Generating Performance-Optimized Hardware: Chisel code can be converted to Verilog and then synthesized, placed, routed, and simulated by standard EDA toolchains to generate performance-optimized hardware.
- Strong Simulation Support: Chisel provides simulation support integrated with ScalaTest and Firrtl, making hardware simulation and verification more convenient and flexible.
Chisel Example of a Full Adder
The circuit design is shown below:

Complete Chisel code:
package examples

import chisel3._

class FullAdder extends Module {
  // Define IO ports
  val io = IO(new Bundle {
    val a    = Input(UInt(1.W))   // Input port 'a' of width 1 bit
    val b    = Input(UInt(1.W))   // Input port 'b' of width 1 bit
    val cin  = Input(UInt(1.W))   // Input port 'cin' (carry-in) of width 1 bit
    val sum  = Output(UInt(1.W))  // Output port 'sum' of width 1 bit
    val cout = Output(UInt(1.W))  // Output port 'cout' (carry-out) of width 1 bit
  })

  // Calculate sum bit (sum of a, b, and cin)
  val s1 = io.a ^ io.b    // XOR operation between 'a' and 'b'
  io.sum := s1 ^ io.cin   // XOR operation between 's1' and 'cin', result assigned to 'sum'

  // Calculate carry-out bit
  val s3 = io.a & io.b    // AND operation between 'a' and 'b', result assigned to 's3'
  val s2 = s1 & io.cin    // AND operation between 's1' and 'cin', result assigned to 's2'
  io.cout := s2 | s3      // OR operation between 's2' and 's3', result assigned to 'cout'
}
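As a companion to the Chisel module above, the following Python sketch acts as a simple reference model of the same full adder and exhaustively checks all eight input combinations. This is the kind of software-side model a verification environment would compare the DUT against; the function name is illustrative.
# Reference model of the 1-bit full adder: sum = a ^ b ^ cin,
# cout = (a & b) | ((a ^ b) & cin), matching the logic of the Chisel code above.
def full_adder_ref(a: int, b: int, cin: int) -> tuple[int, int]:
    s = a ^ b ^ cin
    cout = (a & b) | ((a ^ b) & cin)
    return s, cout

# Exhaustively check the truth table against ordinary integer addition.
for a in (0, 1):
    for b in (0, 1):
        for cin in (0, 1):
            s, cout = full_adder_ref(a, b, cin)
            assert a + b + cin == (cout << 1) | s
            print(f"a={a} b={b} cin={cin} -> sum={s} cout={cout}")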
You can refer to Chisel learning materials from the official documentation: https://www.chisel-lang.org/docs
3 - Creating DUT
Using Guoke Cache as an example, this document introduces how to create a DUT based on Chisel.
In this document, a DUT (Design Under Test) refers to the circuit or system being verified during the chip verification process. The DUT is the primary subject of verification. When creating a DUT based on the picker tool, it is essential to consider the functionality, performance requirements, and verification goals of the subject under test. These goals may include the need for faster execution speed or more detailed test information. Generally, the DUT, written in RTL, is combined with its surrounding environment to form the verification environment (test_env), in which test cases are written. In this project, the DUT is the Python module converted from the RTL design to be tested. Traditional RTL languages include Verilog, System Verilog, VHDL, etc. However, as an emerging RTL design language, Chisel (https://www.chisel-lang.org/) is playing an increasingly important role in RTL design due to its object-oriented features and ease of use. This chapter introduces how to create a DUT, using the conversion of the cache source code from the Guoke Processor (NutShell) into a Python module as an example.
Chisel and Guoke
Chisel is a high-level hardware construction language (HCL) based on the Scala language. Traditional HDLs describe circuits, while HCLs generate circuits, making them more abstract and advanced. The Stage package provided in Chisel can convert HCL designs into traditional HDL languages such as Verilog and System Verilog. With tools like Mill and Sbt, automation in development can be achieved.
Guoke is a sequential single-issue processor implementation based on the RISC-V RV64 open instruction set, modularly designed using the Chisel language. For a more detailed introduction to Guoke, please refer to the link: https://oscpu.github.io/NutShell-doc/.
Guoke cache
The Guoke Cache (Nutshell Cache) is the cache module used in the Guoke processor. It features a three-stage pipeline design. When the third stage pipeline detects that the current request is MMIO or a refill occurs, it will block the pipeline. The Guoke Cache also uses a customizable modular design that can generate different-sized L1 Caches or L2 Caches by changing parameters. Additionally, the Guoke Cache has a coherence interface to handle coherence-related requests.
Chisel to Verilog
The stage library in Chisel helps generate traditional HDL code such as Verilog and System Verilog from Chisel code. Below is a brief introduction on how to convert a cache implementation based on Chisel into the corresponding Verilog circuit description.
Initializing the Guoke Environment
First, download the entire Guoke source code from the source repository and initialize it:
mkdir cache-ut
cd cache-ut
git clone https://github.com/OSCPU/NutShell.git
cd NutShell && git checkout 97a025d
make init
Creating Scala Compilation Configuration
Then, create build.sc in the cache-ut directory with the following content:
import $file.NutShell.build
import mill._, scalalib._
import coursier.maven.MavenRepository
import mill.scalalib.TestModule._
// Specify Nutshell dependencies
object difftest extends NutShell.build.CommonNS {
  override def millSourcePath = os.pwd / "NutShell" / "difftest"
}

// Nutshell configuration
object NtShell extends NutShell.build.CommonNS with NutShell.build.HasChiselTests {
  override def millSourcePath = os.pwd / "NutShell"
  override def moduleDeps = super.moduleDeps ++ Seq(
    difftest,
  )
}

// UT environment configuration
object ut extends NutShell.build.CommonNS with ScalaTest {
  override def millSourcePath = os.pwd
  override def moduleDeps = super.moduleDeps ++ Seq(
    NtShell
  )
}
Instantiating cache
After creating the configuration information, create the src/main/scala source code directory according to the Scala specification. Then, in the source code directory, create nut_cache.scala and use the following code to instantiate the Cache and convert it into Verilog code:
package ut_nutshell
import chisel3._
import chisel3.util._
import nutcore._
import top._
import chisel3.stage._
object CacheMain extends App {
  (new ChiselStage).execute(args, Seq(
    ChiselGeneratorAnnotation(() => new Cache()(CacheConfig(ro = false, name = "tcache", userBits = 16)))
  ))
}
Generating RTL
After creating all the files (build.sc, src/main/scala/nut_cache.scala), execute the following command in the cache-ut directory:
mkdir build
mill --no-server -d ut.runMain ut_nutshell.CacheMain --target-dir build --output-file Cache
Note: For the Mill environment configuration, please refer to https://mill-build.com/mill/Intro_to_Mill.html.
After successfully executing the above command, a Verilog file Cache.v will be generated in the build directory. Then, the picker tool can be used to convert Cache.v into a Python module. Besides Chisel, almost all other HCL languages can generate corresponding RTL codes, so the basic process above also applies to other HCLs.
DUT Compilation
Generally, enabling waveforms, coverage, and other debug features slows down the DUT's execution. Therefore, when generating a Python module with the picker tool, the module can be generated under different configurations: (1) all debug information turned off; (2) waveforms enabled; (3) code line coverage enabled. The first configuration is intended for quickly building the environment for regression testing and similar tasks; the second is used to analyze specific errors, timing, etc.; the third is used to improve coverage.
4 - DUT Verification
This section introduces the general process of verifying a DUT based on Picker.
The goal of the open verification platform is functional verification, which generally involves the following steps:
1. Determine the verification object and goals
Typically, the design documentation of the DUT is also delivered to the verification engineer. At this point, you need to read the documentation or source code to understand the basic functions, main structure, and expected functionalities of the verification object.
2. Build the basic verification environment
After fully understanding the design, you need to build the basic verification environment. For example, in addition to the DUT generated by Picker, you may also need to set up a reference model for comparison and a signal monitoring platform for evaluating subsequent functional points.
3. Decompose functional points and test points
Before officially starting the verification, you need to extract the functional points and further decompose them into test points. You can refer to: CSDN: Chip Verification Series - Decomposition of Testpoints
4. Construct test cases
With the test points, you need to construct test cases to cover the corresponding test points. A test case may cover multiple test points.
5. Collect test results
After running all the test cases, you need to summarize all the test results. Generally, this includes line coverage and functional coverage. The former can be obtained through the coverage function provided by the Picker tool, while the latter requires you to judge whether a function is covered by the test cases through monitoring the behavior of the DUT.
6. Evaluate the test results
Finally, you need to evaluate the obtained results, such as whether there are design errors, whether a function cannot be triggered, whether the design documentation description is consistent with the DUT behavior, and whether the design documentation is clearly described.
Next, we will introduce the general verification process using the MMIO read and write function of the Nutshell Cache as an example:
1. Determine the verification object and goals:
The MMIO read and write function of the Nutshell Cache. MMIO is a special type of IO mapping that supports accessing IO device registers by accessing memory addresses. Since the register state of IO devices can change at any time, it is not suitable to cache it. When receiving an MMIO request, the Nutshell cache will directly access the MMIO memory area to read or write data instead of querying hit/miss in the ordinary cache line.
2. Build the basic verification environment:
We can roughly divide the verification environment into five parts:
1. Testcase Driver: Responsible for generating the corresponding signals driven by the test cases
2. Monitor: Monitors signals to determine whether functions are covered and correct
3. Ref Cache: A simple reference model
4. Memory/MMIO Ram: Simulates peripheral devices so that the corresponding cache requests can be served
5. Nutshell Cache DUT: The DUT generated by Picker
In addition, you may need to further encapsulate the DUT interface to make read and write request operations more convenient. For details, refer to the NutShell CacheWrapper; a minimal sketch of such an interface follows.
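The sketch below is a hypothetical stand-in for such a wrapper: the method names trigger_read_req and recv are taken from the test case later in this section, but the body is backed by a plain dictionary instead of the real picker-generated DUT (the actual NutShell CacheWrapper drives the DUT's request and response signals cycle by cycle).
# Hypothetical stand-in for the cache wrapper used by the MMIO test below.
# It mimics the interface only; it does not drive any real DUT signals.
class FakeCacheWrapper:
    def __init__(self):
        self.mem = {}            # address -> data, modelling the MMIO RAM
        self.pending = []        # outstanding read requests, in order

    def trigger_read_req(self, addr: int) -> None:
        # The real wrapper would assert the request-valid signal, place the
        # address on the bus, and step the clock until the request is accepted.
        self.pending.append(addr)

    def recv(self) -> int:
        # The real wrapper would wait for the response-valid signal and return
        # the data; here we simply pop the oldest request and look it up.
        addr = self.pending.pop(0)
        return self.mem.get(addr, 0)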
3. Decompose functional points and test points:
The Nutshell cache can respond to MMIO requests; this function can be further decomposed into the following test points:
Test Point 1: MMIO requests will be forwarded to the MMIO port
Test Point 2: The cache will not issue burst transfer requests when responding to MMIO requests
Test Point 3: The cache will block the pipeline when responding to MMIO requests
4. Construct test cases:
The construction of test cases is simple. Knowing that the MMIO address range of the Nutshell cache, obtained through Creating DUT, is 0x30000000~0x7fffffff, we only need to access this memory range to obtain the expected MMIO results. Note that to trigger the test point of blocking the pipeline, you may need to initiate requests continuously.
Here is a simple test case:
# import CacheWrapper here

def mmio_test(cache: CacheWrapper):
    mmio_lb = 0x30000000
    mmio_rb = 0x30001000

    print("\n[MMIO Test]: Start MMIO Serial Test")
    for addr in range(mmio_lb, mmio_rb, 16):
        addr &= ~(0xf)
        addr1 = addr
        addr2 = addr + 4
        addr3 = addr + 8

        cache.trigger_read_req(addr1)
        cache.trigger_read_req(addr2)
        cache.trigger_read_req(addr3)

        cache.recv()
        cache.recv()
        cache.recv()

    print("[MMIO Test]: Finish MMIO Serial Test")
5. Collect test results:
'''
In tb_cache.py
'''
# import packages here

class TestCache():
    def setup_class(self):
        color.print_blue("\nCache Test Start")

        self.dut = DUTCache("libDPICache.so")
        self.dut.init_clock("clock")

        # Init here (e.g., the cache wrapper and reference model used below)
        # ...

        self.testlist = ["mmio_serial"]

    def teardown_class(self):
        self.dut.finalize()
        color.print_blue("\nCache Test End")

    def __reset(self):
        # Reset cache and devices
        pass

    # MMIO Test
    def test_mmio(self):
        if ("mmio_serial" in self.testlist):
            # Run test
            from ..test.test_mmio import mmio_test
            mmio_test(self.cache, self.ref_cache)
        else:
            print("\nmmio test is not included")

    def run(self):
        self.setup_class()

        # test
        self.test_mmio()

        self.teardown_class()

if __name__ == "__main__":
    tb = TestCache()
    tb.run()
Run:
python3 tb_cache.py
The above is only a rough execution process; for details, refer to: Nutshell Cache Verify.
6. Evaluate the test results:
After the run is complete, the following data can be obtained:
Line coverage:
Functional coverage:
It can be seen that the preset MMIO functions are all covered and correctly triggered.
5 - Verification Report
After we complete the DUT verification, writing a verification report is a crucial step. This section will provide an overview of the structure of the verification report and the content that needs to be covered.
The verification report is a review of the entire verification process and an important supporting document for determining the reasonableness of the verification. Generally, the verification report should include the following content:
- Basic document information (author, log, version, etc.)
- Verification object (verification target)
- Introduction to functional points
- Verification plan
- Breakdown of test points
- Test cases
- Test environment
- Result analysis
- Defect analysis
- Verification conclusion
The following content provides further explanation of the list above, with specific examples available in nutshell_cache_report_demo.pdf.
1. Basic Information
Including author, log, version, date, etc.
2. Verification object (verification target)
A necessary introduction to your verification object, which may include its structure, basic functions, interface information, etc.
3. Introduction to functional points
By reading the design documents or source code, you need to summarize the target functions of the DUT and break them down into various functional points.
4. Verification plan
Including your planned verification process and verification framework. Additionally, you should explain how each part of your framework works together.
5. Breakdown of test points
Proposed testing methods for the functional points. Specifically, it can include what signal output should be observed under certain signal inputs.
6. Test cases
The specific implementation of the test points. A test case can include multiple test points.
7. Test environment
Including hardware information, software version information, etc.
8. Result analysis
Result analysis generally refers to coverage analysis. Typically, two types of coverage should be considered:
1. Line Coverage: How many RTL lines of code are executed in the test cases. Generally, we require line coverage to be above 98%.
2. Functional Coverage: Determine whether the extracted functional points are covered and correctly triggered based on the relevant signals. We generally require the test cases to cover each functional point.
9. Defect analysis
Analyze the defects present in the DUT. This can include the specification and detail of the design documents, the correctness of the DUT functions (whether there are bugs), and whether the DUT functions can be triggered.
10. Verification conclusion
The final conclusion drawn after completing the chip verification process, summarizing the above content.