AN 763: Intel® Arria® 10 SoC Device Design Guidelines

ID 683192
Date 5/17/2022
Public

2.1.5. Interface Bandwidths

To choose which interface to use to move data between the HPS and FPGA fabric, you must understand the bandwidth of each interface. The following figure illustrates the peak throughput available between the HPS and FPGA fabric, as well as the internal bandwidths within the HPS. This example assumes that the FPGA fabric operates at 250 MHz, the MPU at 1200 MHz, and the 64-bit external SDRAM at 1067 MHz.

For the FPGA-to-SDRAM interface, port configuration 3 is used (FPGA-to-SDRAM0 and FPGA-to-SDRAM2 are both 128 bits wide).

For abbreviations, refer to the figure in Overview of HPS Memory-Mapped Interfaces.
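As a back-of-the-envelope check on the figures above, peak theoretical throughput for a synchronous memory-mapped interface is simply width × clock × transfers-per-clock. The sketch below applies that arithmetic to the clocks stated above; the 128-bit port width comes from port configuration 3, and the doubling factor for the SDRAM interface reflects double-data-rate signaling (an assumption for illustration, not a figure quoted from this document).

```python
def peak_bandwidth_gbps(width_bits: int, clock_hz: float,
                        transfers_per_clock: int = 1) -> float:
    """Peak theoretical throughput in GB/s for a synchronous interface."""
    return width_bits / 8 * clock_hz * transfers_per_clock / 1e9

# One 128-bit FPGA-to-SDRAM port at the 250 MHz fabric clock assumed above
print(peak_bandwidth_gbps(128, 250e6))      # 4.0 GB/s per port

# 64-bit external SDRAM at a 1067 MHz clock, two transfers per clock (DDR)
print(peak_bandwidth_gbps(64, 1067e6, 2))   # ~17.07 GB/s
```

Real designs see less than these peaks once protocol overhead, refresh, and arbitration are accounted for; the figure and table give the relative picture.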

Figure 2. Arria 10 HPS Memory Mapped Bandwidth

Relative Latencies and Throughputs for Each HPS Interface

The following table shows use cases, relative latencies, and throughputs for each interface.

| Interface | Transaction Use Case | Latency | Throughput | Recommended Usage Model |
| --- | --- | --- | --- | --- |
| HPS-to-FPGA | MPU accessing memory in FPGA | Medium | Medium | Yes |
| HPS-to-FPGA | MPU accessing peripheral in FPGA | Medium | Very Low | No—see GUIDELINE: Avoid using the HPS-to-FPGA bridge to access peripheral registers in the FPGA from the MPU. |
| Lightweight HPS-to-FPGA | MPU accessing register in FPGA | Low | Low | Yes |
| Lightweight HPS-to-FPGA | MPU accessing memory in FPGA | Low | Very Low | No—see GUIDELINE: Avoid using the lightweight HPS-to-FPGA bridge to access memory in the FPGA from the MPU. |
| FPGA-to-HPS | FPGA master accessing non-cache coherent SDRAM | High | Medium | No—see GUIDELINE: Avoid using the FPGA-to-HPS bridge to access non-cache coherent SDRAM from soft logic in the FPGA. |
| FPGA-to-HPS | FPGA master accessing HPS on-chip RAM | Low | High | Yes |
| FPGA-to-HPS | FPGA master accessing HPS peripheral | Low | Low | Yes |
| FPGA-to-HPS | FPGA master accessing coherent memory resulting in cache miss | High | Medium | Yes |
| FPGA-to-HPS | FPGA master accessing coherent memory resulting in cache hit | Low | Medium-High | Yes |
| FPGA-to-SDRAM | FPGA master accessing SDRAM through a single FPGA-to-SDRAM port | Medium | High | Yes |
| FPGA-to-SDRAM | FPGA masters accessing SDRAM through multiple FPGA-to-SDRAM ports | Medium | Very High | Yes |

GUIDELINE: Avoid using the HPS-to-FPGA bridge to access peripheral registers in the FPGA from the MPU.

The HPS-to-FPGA bridge is optimized for bursting traffic, while peripheral accesses are typically short, word-sized accesses of only one beat. As a result, if peripherals are accessed through the HPS-to-FPGA bridge, the transaction can be stalled behind other bursting traffic that is already in flight.

GUIDELINE: Avoid using the lightweight HPS-to-FPGA bridge to access memory in the FPGA from the MPU.

The lightweight HPS-to-FPGA bridge is optimized for non-bursting traffic, while memory accesses are typically performed as bursts (often 32 bytes, due to cache line operations). As a result, if memory is accessed through the lightweight HPS-to-FPGA bridge, throughput is limited.
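To make the register-versus-memory distinction concrete, the sketch below shows the recommended pattern: single word-sized register reads through the lightweight bridge from Linux running on the HPS, via `/dev/mem`. It assumes the lightweight bridge window sits at physical address 0xFF200000 with a 2 MB span, per the Arria 10 HPS address map; verify these against your system's address map before use. The register offset in any real design depends on your Platform Designer (Qsys) address assignments.

```python
import mmap
import os
import struct

LWH2F_BASE = 0xFF200000   # lightweight HPS-to-FPGA bridge window (assumed base)
LWH2F_SPAN = 0x00200000   # 2 MB window (assumed span)

def lwh2f_addr(offset: int) -> int:
    """Absolute physical address of a register at `offset` in the bridge window."""
    if not 0 <= offset < LWH2F_SPAN:
        raise ValueError("offset outside the 2 MB lightweight bridge window")
    return LWH2F_BASE + offset

def read_fpga_reg(offset: int) -> int:
    """Read one 32-bit FPGA register via /dev/mem (root, on target only)."""
    lwh2f_addr(offset)  # bounds check
    fd = os.open("/dev/mem", os.O_RDONLY | os.O_SYNC)
    try:
        with mmap.mmap(fd, LWH2F_SPAN, mmap.MAP_SHARED,
                       mmap.PROT_READ, offset=LWH2F_BASE) as window:
            # Single word-sized, non-bursting access: the traffic pattern
            # the lightweight bridge is designed for.
            return struct.unpack_from("<I", window, offset)[0]
    finally:
        os.close(fd)
```

Bulk data, by contrast, should go through the HPS-to-FPGA bridge (or be moved by FPGA masters), where cache-line bursts are handled efficiently.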

GUIDELINE: Avoid using the FPGA-to-HPS bridge to access non-cache coherent SDRAM from soft logic in the FPGA.

The FPGA-to-HPS bridge is optimized for non-SDRAM accesses (peripherals, on-chip RAM, ACP). As a result, accessing SDRAM through this bridge with non-coherent accesses increases latency and limits throughput compared to accesses through the FPGA-to-SDRAM ports.

GUIDELINE: Use soft logic in the FPGA (e.g. a DMA controller) to move shared data between the HPS and FPGA. Avoid using the MPU and the HPS DMA controller for this use case.

When moving shared data between the HPS and FPGA, Intel® recommends doing so from the FPGA rather than using the MPU or the HPS DMA controller. If the FPGA must access cache coherent data, it must issue the transaction through the FPGA-to-HPS bridge with the appropriate signaling to mark it cacheable. If non-cache coherent data must be moved to the FPGA or HPS, a DMA engine implemented in FPGA logic can move the data, achieving the highest possible throughput. Although the HPS includes an internal DMA engine that could move data between the HPS and FPGA, its purpose is to assist peripherals that do not master memory, or to provide memory-to-memory data movement on behalf of the MPU.

GUIDELINE: Use the HPS-to-FPGA Bridges Design Example as a starting point for designs that need to move data through the FPGA-to-HPS bridge or FPGA-to-SDRAM port.

For owners of the Arria 10 SoC Development Kit, Intel offers a design example that moves data through the FPGA-to-HPS bridge and FPGA-to-SDRAM ports. The design measures the throughput of each interface under different burst sizes so that you can see how memory bandwidth varies under different conditions.