This hands-on verification task will help you learn the design of the IFU top module in XiangShan’s Kunming Lake architecture. Through hands-on verification, you will gain insight into the design ideas behind XiangShan’s instruction fetch module and deepen your understanding of the RISC-V instruction set architecture. We welcome you to sign up for this task (registration questionnaire here; QQ group: 600480230).

The Instruction Fetch Unit (IFU) is responsible for receiving fetch requests from the Fetch Target Queue (FTQ) and retrieving instruction cache lines from either the ICache or the instruction uncache. It performs multiple functions including instruction fetch, pre-decoding, RVC-to-RVI instruction expansion, and preliminary branch prediction error checking. Ultimately, it outputs preliminary decoded information and instruction codes to the Instruction Buffer (IBuffer), and writes back verification results to the FTQ.
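As background for the pre-decoding and RVC-related behavior described above, here is a minimal Python sketch, not the IFU's actual logic: it splits a fetch block into 16-bit parcels, distinguishes RVC from RVI instructions by their lowest two bits, and flags a simplified subset of branch/jump encodings. All function names are illustrative.

```python
# Minimal, illustrative pre-decode classifier (not the IFU's real logic).
# It only distinguishes RVC vs. RVI parcels and flags a small subset of
# branch/jump encodings; the function names here are hypothetical.

def is_rvi(parcel: int) -> bool:
    """A 16-bit parcel whose two lowest bits are 0b11 starts a 32-bit (RVI) instruction."""
    return (parcel & 0b11) == 0b11

def rvi_is_branch_or_jump(inst: int) -> bool:
    """Check the major opcode of a 32-bit instruction: BRANCH, JAL, or JALR."""
    return (inst & 0x7F) in (0b1100011, 0b1101111, 0b1100111)

def rvc_is_branch_or_jump(parcel: int) -> bool:
    """Subset check: C.J, C.BEQZ, and C.BNEZ all live in quadrant 01."""
    return (parcel & 0b11) == 0b01 and ((parcel >> 13) & 0b111) in (0b101, 0b110, 0b111)

def classify(fetch_block: bytes):
    """Walk a fetch block two bytes at a time and tag each instruction."""
    i, out = 0, []
    while i + 1 < len(fetch_block):
        parcel = int.from_bytes(fetch_block[i:i + 2], "little")
        if is_rvi(parcel):
            if i + 4 > len(fetch_block):
                out.append(("rvi-cross-block", parcel))  # spans into the next fetch block
                break
            inst = int.from_bytes(fetch_block[i:i + 4], "little")
            out.append(("rvi-branch" if rvi_is_branch_or_jump(inst) else "rvi", inst))
            i += 4
        else:
            out.append(("rvc-branch" if rvc_is_branch_or_jump(parcel) else "rvc", parcel))
            i += 2
    return out
```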

This task covers the verification of the IFU top module. All tasks will be assigned through issues in the UnityChipForXiangShan repository.

Participation

This verification task should be completed based on the UnityChipForXiangShan verification framework, and a PR should be submitted to the repository.

How to Submit

Please fork the UnityChipForXiangShan repository, then complete the verification code and documentation. Once everything is ready, submit a pull request (PR) to contribute your work.

Bug Reporting

Please use the bug report template to file an issue in the UnityChipForXiangShan repository or click here.

When submitting a bug, first apply the bug need to confirm label. Then choose the one of the four severity labels (minor, normal, serious, or critical) that best matches your issue. Finally, select the label for the module where you found the bug; since this verification task focuses on the IFU module, please uniformly apply the ut_frontend.ifu label.

Deliverable Requirements

As noted above, this task should be completed based on the UnityChipForXiangShan verification framework, with a PR submitted to the repository. The task requires the following deliverables:

  1. Verification Environment + API: The verification environment and API are code deliverables that encapsulate the data responsibilities (pins) and behavioral responsibilities (logic) of the DUT. They should provide reusable interfaces, test suites, and definitions for test coverage. Refer to the Verification Environment Setup Guide and the test coverage documentation: Line Coverage, Functional Coverage.

  2. Test Cases: Test cases are code deliverables that define input combinations for testing and their expected outputs. Refer to the Test Case Guide. A hypothetical sketch of how an environment wrapper and a test case fit together is shown after this list.

  3. Verification Report: The verification report is a written deliverable that includes an introduction to the environment, test points, and test cases, as well as the environment and commands required to reproduce the results. It should also include metrics such as test coverage. Refer to the Verification Report Guide.
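As a rough illustration of how the environment/API and a test case fit together, the sketch below uses plain pytest with hypothetical names (IFUEnv, send_ftq_request, collect_ibuffer_output); the real interfaces come from the UnityChipForXiangShan framework and your own environment code.

```python
# Hypothetical sketch; IFUEnv, send_ftq_request, and collect_ibuffer_output
# are illustrative names, not the framework's real API.
import pytest


class IFUEnv:
    """Placeholder wrapper encapsulating the DUT's pins (data) and bus behavior (logic)."""

    def __init__(self, dut=None):
        self.dut = dut

    def send_ftq_request(self, start_addr: int, next_addr: int) -> None:
        # Drive the FTQ -> IFU request channel here (pin-level details omitted).
        pass

    def collect_ibuffer_output(self):
        # Sample and decode the IFU -> IBuffer output here.
        return []


@pytest.fixture
def ifu_env():
    # In a real environment the DUT handle would be created by the framework.
    return IFUEnv()


def test_ifu_rcv_req(ifu_env):
    """Covers feature IFU_RCV_REQ: a well-formed fetch request is accepted."""
    ifu_env.send_ftq_request(start_addr=0x8000_0000, next_addr=0x8000_0020)
    assert ifu_env.collect_ibuffer_output() is not None
```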

The UnityChipForXiangShan verification framework provides a preliminary environment for this task, but participants may also set up their own verification environment and refine APIs.

Task Difficulty

The task difficulty is determined based on a combination of factors such as complexity of understanding and workload. Generally, tasks rated 1 to 3 are considered easy. Tasks rated 4 to 7 are of moderate difficulty—these may involve a larger workload or require some effort to understand hidden details. Tasks rated 8 to 10 are relatively difficult, typically involving both a significant workload and high cognitive complexity.

Reward Information

Ultimately, based on the difficulty of the task and each participant’s performance, you will receive a corresponding bonus. Additionally, if you discover and report a confirmed bug in the IFU top module, you may be eligible for an extra bonus.

Task Details

Since this round focuses on verifying the IFU top module, it will not be divided into multiple submodules; instead, the entire IFU top module needs to be verified as a whole. We will assess the results based on the completeness of testing (e.g., coverage metrics). During the verification process, you are required to define your own test points based on the functional features you select (a minimal bookkeeping sketch for such test points appears after the table below). For a detailed description of the functional features, please refer to the IFU Top Documentation. The full list of features and their corresponding difficulty levels is as follows:

| Feature ID | Feature Name | Description | Difficulty |
| --- | --- | --- | --- |
| IFU_RCV_REQ | IFU Receives FTQ Fetch Request | Receives fetch requests from the FTQ and sets the ready signal | 1/10 |
| IFU_INFOS | IFU Analyzes Request and Fetches Initial Instructions | IFU analyzes the FTQ fetch request and fetches instructions from the ICache | 2/10 |
| IFU_PREDECODE | Pre-Decoding | Pre-decodes the fetched instructions, reconstructs them, and identifies RVC or branch types | 4/10 |
| IFU_RVC_EXPAND | RVC Instruction Expansion | Verifies and expands RVC instructions into RVI format after reconstruction | 5/10 |
| IFU_PREDCHECK | Pre-Check | Performs early checks for simple branch prediction errors | 4/10 |
| IFU_REDIRECT | Misprediction Redirect | Flushes the IFU pipeline based on pre-check results | 3/10 |
| IFU_CROSS_BLOCK | Cross-Prediction-Block Instruction Handling | Handles RVI instructions that span across prediction blocks | 6/10 |
| IFU_IBUFFER | Output to IBuffer | Outputs final instruction codes and pre-decoded information to the IBuffer | 3/10 |
| IFU_OVERRIDE_FLUSH | Override-Based Pipeline Flush | Flushes the pipeline based on overriding inputs from other modules | 3/10 |
| IFU_WB_FTQ | Writeback to FTQ | Writes back pre-decoded info and pre-check results to the FTQ | 3/10 |
| IFU_MMIO | MMIO Request Handling | Handles requests originating from MMIO space using a separate logic flow | 8/10 |
| IFU_FRONTEND_TRIGGER | IFU Frontend Breakpoint | Sets frontend breakpoints in the IFU and checks the corresponding PCs | 3/10 |
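To connect your test points to the feature IDs above, a framework-agnostic bookkeeping sketch is shown below; the FunctionalCoverage class and the sample test-point names are illustrative, and in practice you would use the coverage facilities provided by the UnityChipForXiangShan framework.

```python
# Framework-agnostic sketch of tracking test points per feature ID; the class
# and the sample test-point names are illustrative, not a real framework API.
from collections import defaultdict


class FunctionalCoverage:
    """Tracks which named test points have been hit for each feature ID."""

    def __init__(self):
        self.points = {}              # feature_id -> set of defined test points
        self.hits = defaultdict(set)  # feature_id -> set of test points hit

    def define(self, feature_id, test_points):
        self.points[feature_id] = set(test_points)

    def hit(self, feature_id, test_point):
        if test_point in self.points.get(feature_id, ()):
            self.hits[feature_id].add(test_point)

    def report(self):
        return {fid: len(self.hits[fid]) / len(pts) if pts else 0.0
                for fid, pts in self.points.items()}


cov = FunctionalCoverage()
cov.define("IFU_RCV_REQ", ["ready_high", "ready_low_backpressure"])
cov.define("IFU_PREDCHECK", ["jal_target_error", "non_cfi_predicted_taken"])
cov.hit("IFU_RCV_REQ", "ready_high")
print(cov.report())  # e.g. {'IFU_RCV_REQ': 0.5, 'IFU_PREDCHECK': 0.0}
```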

Registration Method

Please fill out the registration form to participate in this task. You are also welcome to join the official event QQ group: 600480230.

If you have any questions, you may contact the group owner in the QQ group or reach out to the official email of UnityChip: anxu@bosc.ac.cn.