Coherent Accelerator Processor Interface


Coherent Accelerator Processor Interface, officially abbreviated as CAPI, is a high-speed processor expansion bus standard, initially designed to be layered on top of PCI Express, for directly connecting CPUs to external accelerators like GPUs, ASICs, FPGAs or fast storage.[1][2] It offers low latency, high speed, direct memory access connectivity between devices of different instruction set architectures.

History

The performance scaling traditionally associated with Moore's Law—dating back to 1965—began to taper off around 2004, as both Intel's Prescott architecture and IBM's Cell processor pushed toward a 4 GHz operating frequency. Here both projects ran into a thermal scaling wall, whereby heat extraction problems associated with further increases in operating frequency largely outweighed gains from shorter cycle times.

Over the decade that followed, few commercial CPU products exceeded 4 GHz, with the majority of performance improvements coming from incrementally improved microarchitectures, better systems integration, and higher compute density, largely in the form of packing a larger number of independent cores onto the same die, often at the expense of peak operating frequency (Intel's 24-core Xeon E7-8890 from June 2016 has a base operating frequency of just 2.2 GHz, so as to stay within its 165 W per-socket power and cooling budget).

Where large performance gains have been realized, they have often come from increasingly specialized compute units, such as GPUs added to the processor die or external GPU- or FPGA-based accelerators. In many applications, accelerators struggle with limitations of the interconnect's performance (bandwidth and latency) or with limitations of the interconnect's architecture (such as a lack of memory coherence). Especially in the datacenter, improving the interconnect became paramount in moving toward a heterogeneous architecture, in which hardware becomes increasingly tailored to specific compute workloads.

CAPI was developed to enable computers to attach specialized accelerators more easily and efficiently. It was designed by IBM for use in its POWER8-based systems, which came to market in 2014. At the same time, IBM and several other companies founded the OpenPOWER Foundation to build an ecosystem around POWER-based technologies, including CAPI. In October 2016, several OpenPOWER partners formed the OpenCAPI Consortium, together with GPU and CPU designer AMD and systems designers Dell EMC and Hewlett Packard Enterprise, to spread the technology beyond the scope of OpenPOWER and IBM.[3]

Implementation

CAPI is implemented as a functional unit inside the CPU, called the Coherent Accelerator Processor Proxy (CAPP), with a corresponding unit on the accelerator called the Power Service Layer (PSL). The CAPP and PSL units act like a cache directory, so the attached device and the CPU can share the same coherent memory space, and the accelerator becomes an Accelerator Function Unit (AFU), a peer to the other functional units integrated in the CPU.[4][5]

Since the CPU and AFU share the same memory space, low latency and high speed can be achieved, because the CPU does not have to perform address translations or shuffle data between the CPU's main memory and the accelerator's memory space. An application can make use of the accelerator without a device-specific driver, as everything is enabled by a general CAPI kernel extension in the host operating system. The CPU and PSL can read and write directly to each other's memory and registers, as demanded by the application.
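
As a concrete sketch of this programming model, the following C program shows roughly how a host application could attach to an AFU through libcxl, the user-space library that accompanies the Linux cxl driver for CAPI. The device path, the register offset, and the work-element layout are placeholders invented for illustration; a real AFU defines its own register map and descriptor format.

/*
 * Minimal sketch: attach to a CAPI Accelerator Function Unit (AFU)
 * from user space via libcxl. Device path, MMIO offset and the
 * work-element layout are assumptions for illustration only.
 */
#include <stdio.h>
#include <stdint.h>
#include <inttypes.h>
#include <libcxl.h>

int main(void)
{
    /* Hypothetical work element; since the CPU and AFU share one
     * coherent address space, a plain pointer to it is all the AFU
     * needs -- no pinned DMA buffers and no copies. */
    uint64_t work_element[8] = { 0 };

    /* Open the first AFU exposed by the cxl driver (path assumed). */
    struct cxl_afu_h *afu = cxl_afu_open_dev("/dev/cxl/afu0.0d");
    if (afu == NULL) {
        perror("cxl_afu_open_dev");
        return 1;
    }

    /* Start an AFU context; the Work Element Descriptor (WED) passed
     * here is simply the effective address of our buffer. */
    if (cxl_afu_attach(afu, (uint64_t)(uintptr_t)work_element)) {
        perror("cxl_afu_attach");
        cxl_afu_free(afu);
        return 1;
    }

    /* Map the AFU's MMIO area and read one 64-bit register
     * (offset 0x0 is a placeholder, not a real register). */
    uint64_t status = 0;
    if (cxl_mmio_map(afu, CXL_MMIO_BIG_ENDIAN) == 0 &&
        cxl_mmio_read64(afu, 0x0, &status) == 0)
        printf("AFU register 0x0: 0x%016" PRIx64 "\n", status);

    cxl_mmio_unmap(afu);
    cxl_afu_free(afu);
    return 0;
}

The key point is the second argument to cxl_afu_attach: because the address space is coherent, the work element handed to the accelerator is an ordinary pointer into the process's memory.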

CAPI is layered on top of PCIe Gen 3, using 16 PCIe lanes, and is an additional functionality of the PCIe slots on CAPI-enabled systems; such machines usually have designated CAPI-enabled PCIe slots. Since there is only one CAPP per POWER8 processor, the number of possible CAPI units is determined by the number of POWER8 processors, regardless of how many PCIe slots there are. In certain POWER8 systems, IBM uses dual-chip modules, doubling the CAPI capacity per processor socket.

Traditional transactions between a PCIe device and a CPU can take around 20,000 operations, whereas a CAPI-attached device uses only around 500, significantly reducing latency and effectively increasing bandwidth thanks to the decreased operation overhead.[5]

The total bandwidth of a CAPI port is determined by the underlying PCIe 3.0 x16 technology, peaking at approximately 16 GB/s, bidirectional.[6]
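
That figure can be reproduced from standard PCIe parameters alone: 16 lanes at 8 GT/s with 128b/130b line encoding give roughly 15.75 GB/s per direction. A quick check (the encoding factor is part of the PCIe 3.0/4.0 specifications, not anything CAPI-specific):

/* Back-of-the-envelope PCIe bandwidth check for a x16 link. */
#include <stdio.h>

int main(void)
{
    const double lanes = 16.0;
    const double encoding = 128.0 / 130.0;  /* 128b/130b line code */

    /* PCIe 3.0: 8 GT/s per lane; divide by 8 bits per byte. */
    double gen3 = lanes * 8.0 * encoding / 8.0;   /* ~15.75 GB/s */
    /* PCIe 4.0: 16 GT/s per lane (the CAPI 2.0 case below). */
    double gen4 = lanes * 16.0 * encoding / 8.0;  /* ~31.51 GB/s */

    printf("PCIe 3.0 x16: %.2f GB/s per direction\n", gen3);
    printf("PCIe 4.0 x16: %.2f GB/s per direction\n", gen4);
    return 0;
}

The same arithmetic at the doubled PCIe 4.0 signalling rate yields the roughly 32 GB/s quoted for CAPI 2.0 below.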

CAPI 2.0

CAPI 2.0, which will be introduced in the POWER9 processor, will make use of PCIe Gen 4, effectively doubling the performance to 32 GB/s.[6]

OpenCAPI

OpenCAPI, formerly New CAPI or CAPI 3.0, is not layered on top of PCIe and therefore will not use PCIe slots. In IBM's POWER9 CPU it will use the Bluelink 25G I/O facility, which it shares with NVLink 2.0, peaking at 50 GB/s.[7] OpenCAPI does not need the PSL unit (required for CAPI 1 and 2) in the accelerator, as it is not layered on top of PCIe but uses its own transaction protocol.[8]

The technology behind OpenCAPI is governed by the OpenCAPI Consortium, founded in October 2016 by AMD, Google, IBM, Mellanox and Micron together with partners Nvidia, Hewlett Packard Enterprise, Dell EMC and Xilinx.[9]

