ankit chouhan - Academia.edu

ankit chouhan

Related Authors

Steven Pinker

Maurizio Forte

Armando Marques-Guedes

Fabio Cuzzolin

Roshan Chitrakar

Lev Manovich (Graduate Center of the City University of New York)

Prof. Dr. Alison McNamara

PALIMOTE JUSTICE

muhammad faisal

Tâm Hữu (HO CHI MINH CITY UNIVERSITY OF INDUSTRY)

Uploads

Papers by ankit chouhan

8-Bit Arithmetic and Logic Unit Design using Mixed Type of Modeling in VHDL

This paper explains the design and implementation of an 8-bit ALU (arithmetic and logic unit) in VHDL using a mixed style of modeling in Xilinx ISE 8.1i. The ALU takes two 8-bit numbers and performs the principal arithmetic and logic operations: addition, multiplication, and logical AND, OR, XOR, XNOR, and NOR. The main focus of this ALU is the multiplication operation, which uses the radix-4 Booth algorithm with bit-pair recoding to increase the speed of multiplication. We followed a modular design approach, so the ALU is subdivided into smaller logical blocks. All modules in the arithmetic and logic unit are realized in VHDL. The top-level design consists of an arithmetic unit and a logic unit implemented with the mixed type of modeling. The ALU is designed in VHDL and simulated using Xilinx ISE 8.1i [1].
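The radix-4 Booth (bit-pair) recoding named in the abstract halves the number of partial products a multiplier must sum: overlapping three-bit groups of the multiplier are recoded into digits in {-2, -1, 0, +1, +2}, each selecting an easy multiple of the multiplicand. The sketch below is a behavioral Python model of that recoding for a signed multiply, not the paper's VHDL; the function names and helpers are our own assumptions.

```python
# Behavioral model of radix-4 Booth (bit-pair) recoding.
# Illustrative Python, not the paper's VHDL. Assumes `bits` is even.

def to_signed(value, bits):
    """Interpret the low `bits` bits of `value` as two's complement."""
    value &= (1 << bits) - 1
    return value - (1 << bits) if value & (1 << (bits - 1)) else value

def booth_radix4_multiply(a, b, bits=8):
    """Multiply two signed `bits`-bit integers via radix-4 Booth recoding."""
    digit = {  # (b[2i+1], b[2i], b[2i-1]) -> recoded digit
        (0, 0, 0): 0, (0, 0, 1): 1, (0, 1, 0): 1, (0, 1, 1): 2,
        (1, 0, 0): -2, (1, 0, 1): -1, (1, 1, 0): -1, (1, 1, 1): 0,
    }
    # Multiplier bits, LSB first, with an implicit 0 below bit 0.
    mb = [0] + [(b >> i) & 1 for i in range(bits)]
    signed_a = to_signed(a, bits)
    product = 0
    for i in range(bits // 2):
        group = (mb[2 * i + 2], mb[2 * i + 1], mb[2 * i])
        # Each digit selects 0, ±a, or ±2a, shifted two bits per step,
        # so only bits/2 partial products are summed instead of bits.
        product += digit[group] * signed_a * (4 ** i)
    return product
```

For example, booth_radix4_multiply(-7, 13) returns -91 using four recoded partial products rather than eight single-bit ones, which is the source of the hardware speedup.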

Implementation of an Efficient Multiplier based on Vedic Mathematics Using High speed adder

A high-speed controller or processor depends heavily on the multiplier, as it is one of the main hardware blocks in most digital signal processing units as well as in general-purpose processors. This paper presents a high-speed Vedic multiplier architecture that differs from the conventional Vedic multiplier. The most significant aspect of the proposed method is that the developed architecture uses a carry-lookahead adder as the key block for fast addition, which greatly improves the multiplier's performance. It also allows the whole design to be broken into smaller blocks that can be reused wherever required, so with structural modeling a large design can be built from small ones and complexity is reduced for inputs with larger numbers of bits. We wrote the code for the proposed Vedic multiplier in VHDL (Very High Speed Integrated Circuits Hardware Description Language), synthesized and simulated it using Xilinx ISE 8.1i, and downloaded it to ...
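A carry-lookahead adder gains its speed by computing per-bit generate (a AND b) and propagate (a XOR b) signals and then flattening the carry recurrence into fixed-depth logic, so no carry ripples through every stage. The following Python model illustrates that logic behaviorally; it is our own sketch under those standard definitions, not the paper's VHDL design.

```python
# Behavioral model of a carry-lookahead adder; illustrative Python sketch.

def carry_lookahead_add(a, b, bits=8):
    """Add two `bits`-wide unsigned numbers, returning (sum, carry_out)."""
    g = [(a >> i) & (b >> i) & 1 for i in range(bits)]        # generate
    p = [((a >> i) ^ (b >> i)) & 1 for i in range(bits)]      # propagate
    c = [0] * (bits + 1)  # c[0] is the carry-in
    for i in range(bits):
        # In hardware this recurrence is unrolled into sum-of-products
        # terms so all carries appear after a fixed gate delay; here we
        # only model the logic, not the timing.
        c[i + 1] = g[i] | (p[i] & c[i])
    total = 0
    for i in range(bits):
        total |= (p[i] ^ c[i]) << i  # sum bit: a ^ b ^ carry-in
    return total, c[bits]
```

For instance, carry_lookahead_add(200, 100) returns (44, 1): the 8-bit sum wraps to 44 with a carry-out of 1.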

Alpha-Net: Architecture, Models, and Applications

arXiv (Cornell University), Jun 27, 2020

Deep learning network training is usually computationally expensive and intuitively complex. We present a novel network architecture for custom training and weight evaluations. We reformulate the layers as ResNet-like blocks with inputs and outputs of their own; these blocks (called Alpha blocks) form their own network through their connection configuration, and together with our novel loss function and normalization function they form the complete Alpha-Net architecture. We provide an empirical mathematical formulation of the network loss function for a better understanding of accuracy estimation and further optimization. We implemented Alpha-Net with 4 different layer configurations to characterize the architecture's behavior comprehensively. On a custom dataset based on the ImageNet benchmark, Alpha-Net v1, v2, v3, and v4 achieve image-recognition accuracies of 78.2%, 79.1%, 79.5%, and 78.3%, respectively. Alpha-Net v3 improves accuracy by approximately 3% over the state-of-the-art ResNet-50 on the ImageNet benchmark. We also present an analysis on our dataset with 256, 512, and 1024 layers and different versions of the loss function. The input representation is also crucial for training, as initial preprocessing keeps only a handful of features, making training less complex than it would otherwise be. We also compared network behaviour with different layer structures, loss functions, and normalization functions for better quantitative modeling of Alpha-Net.
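The abstract describes Alpha blocks only as ResNet-like units wired together by a connection configuration; the actual block definition, loss, and normalization live in the paper. To fix the core idea, the sketch below shows a generic two-layer residual block in NumPy, where the identity shortcut is what lets many blocks be composed into deep configurations. Every name, shape, and activation here is our assumption, not the authors' design.

```python
# Generic residual block sketch, NOT the authors' Alpha block.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

class ResidualBlock:
    """A plain two-layer residual block: y = relu(W2 @ relu(W1 @ x) + x)."""

    def __init__(self, dim, rng):
        # Small random weights stand in for trained parameters.
        self.w1 = rng.normal(0.0, 0.1, (dim, dim))
        self.w2 = rng.normal(0.0, 0.1, (dim, dim))

    def __call__(self, x):
        # The identity shortcut passes the input around the transform,
        # the ResNet property the Alpha blocks reportedly build on.
        return relu(self.w2 @ relu(self.w1 @ x) + x)

# One possible "connection configuration": a simple sequential chain.
rng = np.random.default_rng(0)
blocks = [ResidualBlock(16, rng) for _ in range(4)]
x = np.ones(16)
for block in blocks:
    x = block(x)
print(x.shape)  # (16,)
```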
