Short hybrid floating-point formats based on 16-bit for FPGA-based arithmetic implementations
Abstract
Many modern devices are designed to perform massive computations while saving resources and reducing the latency of arithmetic operations. These devices are usually GPUs, FPGAs and other specialised hardware such as "Coral". Neural networks, digital filters and numerical simulators take advantage of the massively parallel operation of such devices. One way to reduce the amount of resources used is to limit the size of the registers that store data. This has led to the proliferation of numeric formats shorter than 32 bits, known as short floating point or SFP. We have developed several SFPs for use in our neural network accelerator design, allowing for different levels of accuracy. We use a 16-bit format for data transfer, and different formats can be used simultaneously for internal operations. The internal operations can be performed in 16, 20 and 24 bits. The use of registers larger than 16 bits allows the preservation ...
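To make the idea of a 16-bit SFP concrete, the minimal C sketch below decodes a hypothetical 16-bit format with a 1-bit sign, 8-bit exponent and 7-bit mantissa (a bfloat16-like split with bias 127). The abstract does not state the exact field widths the authors use, so the layout, the `sfp16_t` type and the `sfp16_to_double` helper are purely illustrative assumptions, not the paper's format.

```c
#include <stdint.h>
#include <stdio.h>
#include <math.h>

/* Hypothetical 16-bit short floating-point layout: 1 sign bit,
 * 8 exponent bits (bias 127), 7 mantissa bits. The actual field
 * split used in the paper is not given in the abstract. */
typedef uint16_t sfp16_t;

static double sfp16_to_double(sfp16_t x)
{
    int sign = (x >> 15) & 0x1;
    int exp  = (x >> 7)  & 0xFF;
    int man  = x & 0x7F;

    double value;
    if (exp == 0) {              /* zero or subnormal */
        value = ldexp((double)man / 128.0, 1 - 127);
    } else if (exp == 0xFF) {    /* infinity or NaN */
        value = man ? NAN : INFINITY;
    } else {                     /* normalised number */
        value = ldexp(1.0 + (double)man / 128.0, exp - 127);
    }
    return sign ? -value : value;
}

int main(void)
{
    sfp16_t one  = 0x3F80;  /* exponent 127, mantissa 0 -> 1.0 */
    sfp16_t half = 0x3F00;  /* exponent 126, mantissa 0 -> 0.5 */
    printf("%f %f\n", sfp16_to_double(one), sfp16_to_double(half));
    return 0;
}
```

A wider internal register (20 or 24 bits, as mentioned above) would simply extend the mantissa and/or exponent fields of such a format, which is how intermediate precision can be preserved during accumulation.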