Moeid Heidari | Saint Petersburg State Electrotechnical University "LETI" (ETU)

Moeid Heidari

Related Authors

Jana Šuchmová

Computer Science & Information Technology (CS & IT) Computer Science Conference Proceedings (CSCP)

Anjia Wang

Friedrich-Alexander-Universität Erlangen-Nürnberg

John Alexander Sanabria Ordonez

Dragana Makajić-Nikolić

Dipta Mohon Das

Papers by Moeid Heidari

Multipurpose Cloud-Based Compiler Based on Microservice Architecture and Container Orchestration

Symmetry, Sep 2, 2022

This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.

Optimizing regular computations based on neural networks and Graph Traversal

Procedia Computer Science, 2021

Modern multicore computers can easily manipulate numbers that fit in their native word size, but as numbers grow larger the computation becomes more complex because the width of CPU registers and buses is limited. As a result, arithmetic operations such as addition, subtraction, multiplication, and division become more expensive for the CPU to perform. A number of algorithms have been developed to compute with big numbers; however, the existing algorithms are noticeably slow because they operate on bits individually and are designed to run only on single-core computers. In this paper, an AI model is presented that performs computation on tokens of 8-digit numbers to help boost CPU computation performance.
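
As a rough illustration of the token-based representation the abstract refers to, the C sketch below adds two large numbers stored as 8-digit tokens (base-10^8 limbs). It is an assumption-based sketch of the general idea, not code from the paper, and all function and variable names are invented for the example.

    /* A hedged sketch, not code from the paper: schoolbook addition on
     * 8-digit "tokens" (base-10^8 limbs), illustrating a token-based
     * representation for numbers wider than a CPU register. */
    #include <stdio.h>
    #include <stdint.h>

    #define BASE 100000000ULL   /* 10^8: each token holds 8 decimal digits */

    /* a, b, and sum are little-endian token arrays of n tokens each.
     * Returns the final carry (0 or 1). */
    static uint64_t add_tokens(const uint64_t *a, const uint64_t *b,
                               uint64_t *sum, int n)
    {
        uint64_t carry = 0;
        for (int i = 0; i < n; ++i) {
            uint64_t t = a[i] + b[i] + carry;   /* fits easily in 64 bits */
            sum[i] = t % BASE;
            carry  = t / BASE;
        }
        return carry;
    }

    int main(void)
    {
        /* 9999999999999999 + 1 needs a third token for the final carry. */
        uint64_t a[3] = {99999999ULL, 99999999ULL, 0};
        uint64_t b[3] = {1, 0, 0};
        uint64_t s[3];
        add_tokens(a, b, s, 3);
        printf("%llu %08llu %08llu\n", (unsigned long long)s[2],
               (unsigned long long)s[1], (unsigned long long)s[0]);
        /* prints: 1 00000000 00000000 */
        return 0;
    }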

Towards Optimization of Big Numbers Computation through an AI Pre-trained Model and Graph Traversal

2020 XXIII International Conference on Soft Computing and Measurements (SCM), 2020

Modern multicore computers can easily manipulate numbers up to 64 bits in size, but as numbers grow larger the computation becomes more complex because the width of CPU registers and buses is limited. As a result, arithmetic operations such as addition, subtraction, multiplication, and division become more expensive for the CPU to perform. A number of algorithms have been developed to compute with big numbers; however, the existing algorithms are noticeably slow because they operate on bits individually and are designed to run only on single-core computers. In this paper, an AI model is presented that performs computation on tokens of 8-digit numbers to help boost CPU computation performance.
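
For context, the conventional baseline that such token-level approaches aim to accelerate is schoolbook arithmetic over the same 8-digit tokens. The C sketch below shows schoolbook multiplication on base-10^8 limbs; it is an illustrative sketch, not the paper's AI method, and every name in it is assumed.

    /* A hedged sketch, not the paper's method: schoolbook multiplication
     * on 8-digit tokens (base-10^8 limbs), the digit-serial baseline. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define BASE 100000000ULL   /* 10^8: one token = 8 decimal digits */

    /* Multiplies a (na tokens) by b (nb tokens), little-endian; prod must
     * hold na + nb tokens and is fully overwritten. */
    static void mul_tokens(const uint64_t *a, int na,
                           const uint64_t *b, int nb, uint64_t *prod)
    {
        memset(prod, 0, (size_t)(na + nb) * sizeof(uint64_t));
        for (int i = 0; i < na; ++i) {
            uint64_t carry = 0;
            for (int j = 0; j < nb; ++j) {
                /* a[i]*b[j] < 10^16, so the sum stays inside 64 bits */
                uint64_t acc = prod[i + j] + a[i] * b[j] + carry;
                prod[i + j] = acc % BASE;
                carry       = acc / BASE;
            }
            prod[i + nb] += carry;
        }
    }

    int main(void)
    {
        /* 1234567887654321 * 2 = 2469135775308642 */
        uint64_t a[2] = {87654321ULL, 12345678ULL};
        uint64_t b[1] = {2ULL};
        uint64_t p[3];
        mul_tokens(a, 2, b, 1, p);
        printf("%llu %08llu %08llu\n", (unsigned long long)p[2],
               (unsigned long long)p[1], (unsigned long long)p[0]);
        /* prints: 0 24691357 75308642 */
        return 0;
    }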

Mathematical Computations Based on a Pre-trained AI Model and Graph Traversal

Modern multicore computers can easily manipulate numbers up to 64 bits in size, but as numbers grow larger the computation becomes more complex because the width of CPU registers and buses is limited. As a result, arithmetic operations such as addition, subtraction, multiplication, and division become more expensive for the CPU to perform. A number of algorithms have been developed to compute with big numbers; however, the existing algorithms are noticeably slow because they operate on bits individually and are designed to run only on single-core computers. In this paper, an AI model is presented that performs computation on tokens of 8-digit numbers to help boost CPU computation performance.
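
A prerequisite for any token-level scheme is converting a decimal string into 8-digit tokens. The C sketch below shows one straightforward way to do that; it is an illustration under assumed details, not code from the paper.

    /* A hedged sketch (names are illustrative, not from the paper):
     * splitting a decimal string into 8-digit tokens, i.e. the
     * base-10^8 limb representation used in the examples above. */
    #include <stdio.h>
    #include <stdint.h>
    #include <string.h>

    #define TOKEN_DIGITS 8

    /* Converts a decimal string into little-endian 8-digit tokens.
     * Returns the number of tokens written (at most max_tokens). */
    static int tokenize(const char *dec, uint64_t *tokens, int max_tokens)
    {
        int len = (int)strlen(dec);
        int count = 0;
        /* Walk from the least significant end, 8 digits at a time. */
        for (int end = len; end > 0 && count < max_tokens; end -= TOKEN_DIGITS) {
            int start = end - TOKEN_DIGITS;
            if (start < 0) start = 0;
            uint64_t value = 0;
            for (int i = start; i < end; ++i)
                value = value * 10 + (uint64_t)(dec[i] - '0');
            tokens[count++] = value;
        }
        return count;
    }

    int main(void)
    {
        uint64_t t[4];
        int n = tokenize("123456789012345678901234", t, 4);
        for (int i = n - 1; i >= 0; --i)
            printf("%08llu ", (unsigned long long)t[i]);
        printf("\n");   /* prints: 12345678 90123456 78901234 */
        return 0;
    }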

Using OpenMP to Optimize Model Training Process in Machine Learning Algorithms

2021 II International Conference on Neural Networks and Neurotechnologies (NeuroNT)
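
As a generic illustration of the technique named in the title, the C sketch below parallelizes a gradient-descent training loop with an OpenMP reduction. The model, data, and hyperparameters are assumptions made for the example, and this is not the paper's implementation.

    /* A hedged sketch, not the paper's implementation: loop-level
     * parallelism with an OpenMP reduction inside a gradient-descent
     * training loop for y ~ w*x. Compile with: cc -fopenmp file.c */
    #include <stdio.h>

    #define N 1000000

    static double x[N], y[N];

    int main(void)
    {
        double w = 0.0, lr = 1e-7;

        /* Synthetic data: the true weight is 2.0. */
        for (int i = 0; i < N; ++i) { x[i] = i * 1e-3; y[i] = 2.0 * x[i]; }

        for (int epoch = 0; epoch < 100; ++epoch) {
            double grad = 0.0;
            /* Each thread accumulates a private partial sum; OpenMP
             * combines the partial sums into grad after the loop. */
            #pragma omp parallel for reduction(+:grad)
            for (int i = 0; i < N; ++i) {
                double err = w * x[i] - y[i];
                grad += 2.0 * err * x[i];
            }
            w -= lr * grad / N;
        }
        printf("learned w = %f\n", w);   /* approaches 2.0 */
        return 0;
    }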
