Real-time fast learning hardware implementation
Related papers
Design and implementation of deep neural network hardware chip and its performance analysis
2024
An artificial neural network (ANN) with a single layer has only a limited capacity to process data. In the human brain, many neurons are interconnected, and the brain's real capability lies in this interconnectedness. Deep learning, as a generalization of the ANN, uses two or more hidden layers, which means a larger number of neurons is needed to construct the model. A network with more than one hidden layer is referred to as a deep neural network, and the process of training such networks is referred to as deep learning. This article focuses on the design of a multilayer (deep) neural network targeting the Spartan-6 (xc6stx4-2t9g144) field programmable gate array (FPGA). The simulation is carried out using Xilinx ISE and ModelSim software. The design contains two hidden layers: (2×1) multiplexer blocks reduce twenty neurons to ten neurons at the output of the first hidden layer, and (1×2) demultiplexers perform the reverse expansion. Hardware utilization is estimated on the FPGA to evaluate the performance of the deep neural hardware chip in terms of memory, flip-flops, delay, and frequency. The design is scalable and applicable to various FPGA devices, which makes the work novel. The result is an FPGA-based neuromorphic hardware acceleration platform with high speed and low power for discrete spike processing, with strong real-time performance.
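As a rough software analogue of the layer structure described above (a first hidden layer reducing twenty neurons to ten outputs, followed by a second hidden layer), the sketch below builds a small two-hidden-layer network in NumPy. The second hidden layer width, the output width, and the ReLU activation are illustrative assumptions not taken from the paper, and the multiplexer/demultiplexer datapath is hardware-specific and not modeled here.

```python
import numpy as np

# Minimal software sketch of the described deep network structure:
# 20 inputs reduced to 10 neurons in the first hidden layer, then a
# second hidden layer and an output layer (widths beyond 20 -> 10 are
# assumed for illustration only).

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    # Small random weights and zero biases stand in for trained parameters.
    return rng.standard_normal((n_in, n_out)) * 0.1, np.zeros(n_out)

W1, b1 = layer(20, 10)   # first hidden layer: 20 neurons -> 10 neurons
W2, b2 = layer(10, 10)   # second hidden layer (assumed width)
W3, b3 = layer(10, 2)    # output layer (assumed width)

def relu(x):
    return np.maximum(x, 0.0)

def forward(x):
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return h2 @ W3 + b3

x = rng.standard_normal(20)   # one 20-element input vector
print(forward(x))
```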
Design of Convolutional Neural Network Based on FPGA
WSEAS TRANSACTIONS ON SIGNAL PROCESSING, 2022
Recently, with the rapid development of artificial intelligence (AI), deep learning algorithms represented by convolutional neural networks (CNNs) have been widely utilized in many fields and have shown unique advantages, particularly in skin cancer (SC) imaging. Neural networks (NNs) are a method for performing machine learning (ML) and belong to the family of techniques known as deep learning (DL). DL refers to the use of multiple layers within a neural network to perform the training and classification of data. CNNs, a kind of neural network and a prominent machine learning algorithm, go through multiple phases before they are implemented in hardware to perform particular tasks for a specific application. State-of-the-art CNNs are computationally intensive, yet their parallel and modular nature makes platforms such as field programmable gate arrays (FPGAs) well suited to accelerating them. The objective of this paper is to implement a hardware arc...
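To make concrete the computational pattern that FPGA acceleration targets, the following minimal NumPy sketch shows a single 2-D convolution followed by a ReLU, the core operation of a CNN layer. The input size, kernel size, and unit stride are arbitrary assumptions used only for illustration, not details from the paper.

```python
import numpy as np

# Minimal sketch of the 2-D convolution at the heart of a CNN layer.
# Each output pixel is an independent multiply-accumulate, which is the
# parallel, modular structure that FPGAs exploit.

def conv2d(image, kernel):
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.random.rand(8, 8)       # e.g. a small grayscale patch (assumed size)
kernel = np.random.rand(3, 3)      # one 3x3 convolution filter (assumed size)
feature_map = np.maximum(conv2d(image, kernel), 0.0)   # ReLU activation
print(feature_map.shape)           # (6, 6)
```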
A Review of the Optimal Design of Neural Networks Based on FPGA
Applied Sciences
Deep learning based on neural networks has been widely used in image recognition, speech recognition, natural language processing, autonomous driving, and other fields and has made breakthrough progress. FPGAs stand out in the field of accelerated deep learning thanks to advantages such as a flexible architecture and logic units, a high energy-efficiency ratio, strong compatibility, and low delay. To track the latest results on FPGA-based neural network optimization and to keep abreast of current research hotspots and application fields, the related technologies and research are reviewed. This paper introduces the development history and application fields of several representative neural networks and points out the importance of studying deep learning technology, as well as the reasons for and advantages of using FPGAs to accelerate deep learning. Several common neural network models are introduced. Moreover, this paper reviews the current mainstr...