Algorithms
Journal Description
Algorithms is a peer-reviewed, open access journal that provides an advanced forum for studies related to algorithms and their applications. Algorithms is published monthly online by MDPI. The European Society for Fuzzy Logic and Technology (EUSFLAT) is affiliated with Algorithms, and its members receive discounts on the article processing charges.
- Open Access — free for readers, with article processing charges (APC) paid by authors or their institutions.
- High Visibility: indexed within Scopus, ESCI (Web of Science), Ei Compendex, and other databases.
- Journal Rank: JCR - Q2 (Computer Science, Theory and Methods) / CiteScore - Q1 (Numerical Analysis)
- Rapid Publication: manuscripts are peer-reviewed and a first decision is provided to authors approximately 18.9 days after submission; acceptance to publication takes 3.4 days (median values for papers published in this journal in the second half of 2024).
- Testimonials: See what our editors and authors say about Algorithms.
- Recognition of Reviewers: reviewers who provide timely, thorough peer-review reports receive vouchers entitling them to a discount on the APC of their next publication in any MDPI journal, in appreciation of the work done.
Impact Factor: 1.8 (2023); 5-Year Impact Factor: 1.9 (2023)
Latest Articles
Open Access Review
AI-Driven Optimization of Blockchain Scalability, Security, and Privacy Protection
by Fujiang Yuan, Zihao Zuo, Yang Jiang, Wenzhou Shu, Zhen Tian, Chenxi Ye, Junye Yang, Zebing Mao, Xia Huang, Shaojie Gu and Yanhong Peng
Algorithms 2025, 18(5), 263; https://doi.org/10.3390/a18050263 - 2 May 2025
With the continuous development of technology, blockchain has been widely used in various fields by virtue of its decentralization, data integrity, traceability, and anonymity. However, blockchain still faces many challenges, such as scalability and security issues. Artificial intelligence, with its powerful data processing capability, pattern recognition ability, and adaptive optimization algorithms, can improve the transaction processing efficiency of blockchain, enhance the security mechanism, and optimize the privacy protection strategy, thus effectively alleviating the limitations of blockchain in terms of scalability and security. Most of the existing related reviews explore the application of AI in blockchain as a whole but lack in-depth classification and discussion on how AI can empower the core aspects of blockchain. This paper explores the application of artificial intelligence technologies in addressing core challenges of blockchain systems, specifically in terms of scalability, security, and privacy protection. Instead of claiming a deep theoretical integration, we focus on how AI methods, such as machine learning and deep learning, have been effectively adopted to optimize blockchain consensus algorithms, improve smart contract vulnerability detection, and enhance privacy-preserving mechanisms like federated learning and differential privacy. Through comprehensive classification and discussion, this paper provides a structured overview of the current research landscape and identifies potential directions for further technical collaboration between AI and blockchain technologies.
Open Access Article
by Panke Qin, Yongjie Ding, Ya Li, Bo Ye, Zhenlun Gao, Yaxing Liu, Zhongqi Cai and Haoran Qi
Algorithms 2025, 18(5), 262; https://doi.org/10.3390/a18050262 - 2 May 2025
Financial Time Series Forecasting (TSF) remains a critical challenge in Artificial Intelligence (AI) due to the inherent complexity of financial data, characterized by strong non-linearity, dynamic non-stationarity, and multi-factor coupling. To address the performance limitations of Spiking Neural Networks (SNNs) caused by hyperparameter sensitivity, this study proposes an SNN model optimized by an Improved Cuckoo Search (ICS) algorithm (termed ICS-SNN). The ICS algorithm enhances global search capability through piecewise-mapping-based population initialization and introduces a dynamic discovery probability mechanism that adaptively increases with iteration rounds, thereby balancing exploration and exploitation. Applied to futures market price difference prediction, experimental results demonstrate that ICS-SNN achieves reductions of 13.82% in MAE, 21.27% in MSE, and 15.21% in MAPE, while improving the coefficient of determination (R²) from 0.9790 to 0.9822, compared to the baseline SNN. Furthermore, ICS-SNN significantly outperforms mainstream models such as Long Short-Term Memory (LSTM) and Backpropagation (BP) networks, reducing prediction errors by 10.8% (MAE) and 34.9% (MSE), respectively, without compromising computational efficiency. This work highlights that ICS-SNN provides a biologically plausible and computationally efficient framework for complex financial TSF, bridging the gap between neuromorphic principles and real-world financial analytics. The proposed method not only reduces manual intervention in hyperparameter tuning but also offers a scalable solution for high-frequency trading and multi-modal data fusion in future research.
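The two ICS ingredients named in the abstract, piecewise-chaotic-map initialization and a discovery probability that grows with the iteration count, can be sketched independently of the SNN it tunes. The map constant, the pa schedule, and the Gaussian step below are illustrative assumptions, not the paper's exact settings:

```python
import random

def ics_minimize(f, dim, bounds, n_nests=15, iters=200, seed=0):
    """Sketch of an Improved Cuckoo Search: chaotic (piecewise-map)
    initialization plus a discovery probability pa that increases over the
    iterations, shifting from exploration toward exploitation."""
    rng = random.Random(seed)
    lo, hi = bounds

    def chaotic_sequence(x, p=0.4):
        # piecewise linear chaotic map (PWLCM); p is an assumed constant
        while True:
            x = x / p if x < p else (1 - x) / (1 - p)
            if not 0 < x < 1:          # guard against degenerate fixed points
                x = rng.random()
            yield x

    gen = chaotic_sequence(rng.random() * 0.9 + 0.05)
    nests = [[lo + (hi - lo) * next(gen) for _ in range(dim)]
             for _ in range(n_nests)]
    best = min(nests, key=f)

    for t in range(iters):
        pa = 0.1 + 0.4 * t / iters     # discovery probability grows with t
        for i, nest in enumerate(nests):
            # small random step toward the current best (Gaussian surrogate
            # for the usual Levy flight)
            cand = [min(hi, max(lo, x + rng.gauss(0, 0.1) * (b - x)))
                    for x, b in zip(nest, best)]
            if f(cand) < f(nest):
                nests[i] = cand
            if rng.random() < pa:      # abandon and re-seed a discovered nest
                nests[i] = [rng.uniform(lo, hi) for _ in range(dim)]
        best = min(nests + [best], key=f)
    return best, f(best)
```

On a 2-D sphere function the sketch closes in on the origin, which is all it is meant to demonstrate.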
Open Access Article
A New Algorithm for Computing the Distance and the Diameter in Circulant Graphs
In the present study, we focus on circulant graphs, C_n(S), with vertex set {0, 1, …, n−1}, in which two distinct vertices i and j are adjacent if and only if |i−j|_n ∈ S, where S is a generating set. Despite their regularity, there are currently no established formulas to accurately determine the distance and the diameter of circulant graphs. In light of this context, we present in this paper a novel approach, which relies on a simple algorithm, capable of yielding formulas for the distance and the diameter of circulant graphs without implementing any graph.
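The paper's closed-form approach is not reproduced here, but for context, distances and the diameter of C_n(S) can always be computed by breadth-first search from vertex 0, since vertex-transitivity makes one source sufficient. A minimal sketch, where S holds the allowed circular differences:

```python
from collections import deque

def circulant_distances(n, S):
    """BFS over C_n(S): vertices 0..n-1, i ~ j iff the circular difference
    |i-j|_n is in S. Distances from vertex 0 determine all distances by
    vertex-transitivity."""
    dist = [-1] * n
    dist[0] = 0
    q = deque([0])
    while q:
        v = q.popleft()
        for s in S:
            for w in ((v + s) % n, (v - s) % n):   # both circular neighbors
                if dist[w] == -1:
                    dist[w] = dist[v] + 1
                    q.append(w)
    return dist

def circulant_diameter(n, S):
    return max(circulant_distances(n, S))
```

For example, C_8({1}) is the 8-cycle with diameter 4, while adding the difference 2 halves every route: C_8({1, 2}) has diameter 2.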
Open Access Article
by Wilmer Clemente Cunuhay Cuchipe, Johnny Bajaña Zajia, Byron Oviedo and Cristian Zambrano-Vega
Efficient sales route optimization is a critical challenge in logistics and distribution, especially under real-world conditions involving traffic variability and dynamic constraints. This study proposes a novel Hybrid Genetic Algorithm (GAAM-TS) that integrates Adaptive Mutation, Tabu Search, and an LSTM-based travel time prediction model to enable real-time, intelligent route planning. The approach addresses the limitations of traditional genetic algorithms by enhancing solution quality, maintaining population diversity, and incorporating data-driven traffic estimations via deep learning. Experimental results on real-world data from the NYC Taxi dataset show that GAAM-TS significantly outperforms both Standard GA and GA-AM variants, achieving up to 20% improvement in travel efficiency while maintaining robustness across problem sizes. Although GAAM-TS incurs higher computational costs, it is best suited for offline or batch optimization scenarios, whereas GA-AM provides a balanced alternative for near-real-time applications. The proposed methodology is applicable to last-mile delivery, fleet routing, and sales territory management, offering a scalable and adaptive solution. Future work will explore parallelization strategies and multi-objective extensions for sustainability-aware routing.
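The adaptive-mutation ingredient can be illustrated on its own; the Tabu Search memory and the LSTM travel-time predictor are omitted, and the diversity-driven mutation schedule below is an assumption rather than the paper's formula:

```python
import random

def ga_adaptive_route(dist, pop_size=30, gens=150, seed=1):
    """GA sketch for route optimization with adaptive mutation: the mutation
    probability rises when the population loses diversity. 'dist' is a
    symmetric distance matrix over n stops; tours are permutations."""
    rng = random.Random(seed)
    n = len(dist)

    def length(tour):
        return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=length)
        # adaptive mutation: mutate more when the population has collapsed
        diversity = len({tuple(t) for t in pop}) / pop_size
        pm = 0.05 + 0.5 * (1 - diversity)
        nxt = pop[:2]                          # elitism keeps the best tours
        while len(nxt) < pop_size:
            a, b = rng.sample(pop[:pop_size // 2], 2)
            cut = rng.randrange(1, n)          # simplified order crossover
            child = a[:cut] + [c for c in b if c not in a[:cut]]
            if rng.random() < pm:              # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            nxt.append(child)
        pop = nxt
    return min(pop, key=length)
```

On four stops placed at the corners of a unit square, the sketch recovers the perimeter tour of length 4.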
Open Access Article
by Zhongqin Xiong, Shichang Huang, Shen Ren, Yutong Lin, Zewen Li, Dongyu Li and Fangming Deng
With present fault detection methods for low-voltage distribution networks, it is difficult to detect single-phase grounding faults under complex working conditions. In this paper, a particle swarm optimization (PSO) support vector machine (SVM)-based grounding fault detection method is proposed for distribution networks. The PSO algorithm is improved by refining the inertia weight value and introducing a flight-time factor, and the parameters C and g of the SVM are optimized with the improved PSO algorithm. On this basis, a grounding fault detection method is established. By testing the proposed model in simulation and experiment, its effectiveness and detection accuracy are validated.
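The role PSO plays here, searching a box of hyperparameters for the values minimizing an error measure, can be sketched with a generic objective standing in for the SVM's cross-validation error. The linearly decreasing inertia weight is one common refinement of the kind described; the paper's flight-time factor is a further modification not reproduced here:

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=100, seed=0):
    """Minimal global-best PSO with a linearly decreasing inertia weight.
    In the paper's setting f would be the SVM error as a function of (C, g);
    here f is any objective over the box given by 'bounds'."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    gbest = min(pbest, key=f)
    for t in range(iters):
        w = 0.9 - 0.5 * t / iters        # inertia: explore early, exploit late
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + 2.0 * r1 * (pbest[i][d] - pos[i][d])
                             + 2.0 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            if f(pos[i]) < f(pbest[i]):  # personal best update
                pbest[i] = pos[i][:]
        gbest = min(pbest + [gbest], key=f)
    return gbest, f(gbest)
```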
Open Access Article
by Esteban Bravo-López, Tomás Fernández, Chester Sellers and Jorge Delgado-García
Landslides are hazardous events that occur mainly in mountainous areas and cause substantial losses of various kinds worldwide; therefore, it is important to investigate them. In this study, a specific Machine Learning (ML) method was further analyzed due to the good results obtained in the previous stage of this research. The algorithm implemented is Extreme Gradient Boosting (XGBoost), which was used to evaluate the susceptibility to landslides recorded in the city of Cuenca (Ecuador) and its surroundings, generating the respective Landslide Susceptibility Maps (LSM). For the model implementation, a landslide inventory updated to 2019 was used, and several sets from 15 available conditioning factors were considered, applying two different methods of random point sampling. Additionally, a hyperparameter tuning process for XGBoost was employed in order to optimize the predictive and computational performance of each model. The results obtained were validated using AUC-ROC, F-Score, and the degree of landslide coincidence adjustment at high and very high susceptibility levels, showing a good predictive capacity in most cases. The best results were obtained with the set of the six best conditioning factors previously determined, as it produced good values in validation metrics (AUC = 0.83; F-Score = 0.73) and a degree of coincidence of landslides in the high and very high susceptibility levels above 90%. The Wilcoxon test established significant differences between the methods. These results show the need to perform susceptibility analyses with different data sets to determine the most appropriate ones.
Open Access Article
Early Risk Prediction in Acute Aortic Syndrome on Clinical Data Using Machine Learning
by Mehdi Tavafi, Kalpdrum Passi and Robert Ohle
This study explores machine learning’s potential for early Acute Aortic Syndrome (AAS) prediction by integrating and cleaning extensive clinical datasets from 68 emergency departments in the USA, covering the medical histories of nearly 150,000 patients from 2021 to 2022. Utilizing various data-splitting strategies and classifiers, the research constructs predictive models and addresses dataset size limitations, achieving an exceptional accuracy of 99.3% with the Relief feature method and random forest classifier, facilitating further research on AAS and other cardiovascular diseases.
Open Access Article
Simulating Intraday Electricity Consumption with ForGAN
by Ralf Korn and Laurena Ramadani
Sparse data and an unknown conditional distribution of future values are challenges for managing risks inherent in the evolution of time series. This contribution addresses both aspects through the application of ForGAN, a special form of a generative adversarial network (GAN), to German electricity consumption data. Electricity consumption time series have been selected due to their typical combination of (non-linear) seasonal behavior on different time scales and of local random effects. The primary objective is to demonstrate that ForGAN is able to capture such complicated seasonal figures and to generate data with the correct underlying conditional distribution without data preparation, such as de-seasonalization. In particular, ForGAN does so without assuming an underlying model for the evolution of the time series and is purely data-based. The training and validation procedures are described in great detail. Specifically, a long iteration process of the interplay between the generator and discriminator is required to obtain convergence of the parameters that determine the conditional distribution from which additional artificial data can be generated. Additionally, extensive quality assessments of the generated data are conducted by looking at histograms, auto-correlation structures, and further features comparing the real and the generated data. As a result, the generated data match the conditional distribution of the next consumption value of the training data well. Thus, the trained generator of ForGAN can be used to simulate additional time series of German electricity consumption, which can be seen as a kind of proof of the applicability of ForGAN.
Through a detailed description of the necessary training and validation steps, a thorough quality check before the actual use of the simulated data, and the intuition and mathematical background behind ForGAN, this contribution aims to demystify the application of GANs and to motivate both theorists and researchers in applied sciences to use them for data generation in similar applications. The proposed framework lays out a plan for doing so.
Open Access Systematic Review
by Aleksander Dabek, Lorenzo Mantovani, Susanna Mirabella, Michele Vignati and Simone Cinquemani
This paper provides a comprehensive overview of state-of-the-art non-destructive methods for detecting plant biochemical traits through spectral imaging of leafy greens. It offers insights into the various detection techniques and their effectiveness. The review emphasizes the algorithms used for spectral data analysis, highlighting advancements in computational methods that have contributed to improving detection accuracy and efficiency. This systematic review, following the PRISMA 2020 guidelines, explores the applications of non-destructive measurements, techniques, and algorithms, including hyperspectral imaging and spectrometry, for detecting a wide range of chemical compounds and elements in lettuce, basil, and spinach. It covers studies published from 2019 onward, focusing on the detection of compounds such as chlorophyll, carotenoids, nitrogen, nitrate, and anthocyanin. Additional compounds such as phosphorus, vitamin C, magnesium, glucose, sugar, water content, calcium, soluble solid content, sulfur, and pH are also mentioned, although they were not the primary focus of this study. The techniques used are showcased and highlighted for each compound, and the accuracies achieved are presented to demonstrate effective detection.
Open Access Article
by Ivan Dimov and Rayna Georgieva
Many important practical problems connected to energy efficiency in buildings, ecology, metallurgy, the development of wireless communication systems, the optimization of radar technology, quantum computing, pharmacology, and seismology are described by large-scale mathematical models that are typically represented by systems of partial differential equations. Such systems often involve numerous input parameters. It is crucial to understand how susceptible the solutions are to uncontrolled variations or uncertainties within these input parameters. This knowledge helps in identifying critical factors that significantly influence the model’s outcomes and can guide efforts to improve the accuracy and reliability of predictions. Sensitivity analysis (SA) is a method used efficiently to assess the sensitivity of the output results from large-scale mathematical models to uncertainties in their input data. By performing SA, we can better manage risks associated with uncertain inputs and make more informed decisions based on the model’s outputs. In recent years, researchers have developed advanced algorithms based on the analysis of variance (ANOVA) technique for computing numerical sensitivity indicators. These methods have also incorporated computationally efficient Monte Carlo integration techniques. This paper presents a comprehensive theoretical and experimental investigation of Monte Carlo algorithms based on “symmetrized shaking” of Sobol’s quasi-random sequences. The theoretical proof demonstrates that these algorithms exhibit an optimal rate of convergence for functions with continuous and bounded first derivatives and for functions with continuous and bounded second derivatives, respectively, both in terms of probability and mean square error. For the purposes of numerical study, these approaches were successfully applied to a particular problem. A specialized software tool for the global sensitivity analysis of an air pollution mathematical model was developed.
Sensitivity analyses were conducted regarding some important air pollutant levels, calculated using a large-scale mathematical model describing the long-distance transport of air pollutants—the Unified Danish Eulerian Model (UNI-DEM). The sensitivity of the model was explored focusing on two distinct categories of key input parameters: chemical reaction rates and input emissions. To validate the theoretical findings and study the applicability of the algorithms across diverse problem classes, extensive numerical experiments were conducted to calculate the main sensitivity indicators—Sobol’ global sensitivity indices. Various numerical integration algorithms were employed to meet this goal—Monte Carlo, quasi-Monte Carlo (QMC), scrambled quasi-Monte Carlo methods based on Sobol’s sequences, and a sensitivity analysis approach implemented in the SIMLAB software for sensitivity analysis. During the study, an essential task arose: the treatment of sensitivity measures that are small in value. These required numerical integration approaches of higher accuracy to ensure reliable predictions based on the mathematical model, which gives small sensitivity measures a vital role. Both the analysis and numerical results highlight the advantages of one of the proposed approaches in terms of accuracy and efficiency, particularly for relatively small sensitivity indices.
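The Sobol' indices at the center of this study have a standard plain Monte Carlo "pick-freeze" estimator; the paper's algorithms replace the pseudo-random draws below with (scrambled) Sobol' quasi-random points to improve the convergence rate. A sketch of the first-order estimator for inputs uniform on [0, 1]:

```python
import random

def sobol_first_order(f, dim, n=20000, seed=0):
    """Plain Monte Carlo pick-freeze estimator of Sobol' first-order
    indices S_i = V_i / V, using two independent sample matrices A and B
    and the mixed matrix A_B^(i) (column i taken from B, rest from A)."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(dim)] for _ in range(n)]
    B = [[rng.random() for _ in range(dim)] for _ in range(n)]
    yA = [f(x) for x in A]
    yB = [f(x) for x in B]
    mean = sum(yA) / n
    var = sum((y - mean) ** 2 for y in yA) / n   # total variance estimate
    indices = []
    for i in range(dim):
        yABi = [f(a[:i] + [b[i]] + a[i + 1:]) for a, b in zip(A, B)]
        # V_i ~= (1/n) * sum yB * (yABi - yA)
        vi = sum(yb * (yab - ya) for ya, yab, yb in zip(yA, yABi, yB)) / n
        indices.append(vi / var)
    return indices
```

For the linear test model f(x) = x0 + 2*x1 the exact indices are S1 = 0.2 and S2 = 0.8, which the estimator recovers up to Monte Carlo error.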
Open Access Article
Integrated Model Selection and Scalability in Functional Data Analysis Through Bayesian Learning
by Wenzheng Tao, Sarang Joshi and Ross Whitaker
Functional data, including one-dimensional curves and higher-dimensional surfaces, have become increasingly prominent across scientific disciplines. They offer a continuous perspective that captures subtle dynamics and richer structures compared to discrete representations, thereby preserving essential information and facilitating the more natural modeling of real-world phenomena, especially in sparse or irregularly sampled settings. A key challenge lies in identifying low-dimensional representations and estimating covariance structures that capture population statistics effectively. We propose a novel Bayesian framework with a nonparametric kernel expansion and a sparse prior, enabling the direct modeling of measured data and avoiding the artificial biases from regridding. Our method, Bayesian scalable functional data analysis (BSFDA), automatically selects both subspace dimensionalities and basis functions, reducing the computational overhead through an efficient variational optimization strategy. We further propose a faster approximate variant that maintains comparable accuracy but accelerates computations significantly on large-scale datasets. Extensive simulation studies demonstrate that our framework outperforms conventional techniques in covariance estimation and dimensionality selection, showing resilience to high dimensionality and irregular sampling. The proposed methodology proves effective for multidimensional functional data and showcases practical applicability in biomedical and meteorological datasets. Overall, BSFDA offers an adaptive, continuous, and scalable solution for modern functional data analysis across diverse scientific domains.
Open Access Article
Detecting and Analyzing Botnet Nodes via Advanced Graph Representation Learning Tools
by Alfredo Cuzzocrea, Abderraouf Hafsaoui and Carmine Gallo
Private consumers, small businesses, and even large enterprises are all at risk from botnets. These botnets are known for spearheading Distributed Denial-of-Service (DDoS) attacks, spamming large populations of users, and causing critical harm to major organizations. The development of Internet of Things (IoT) devices led to the use of these devices for cryptocurrency mining, in-transit data interception, and sending logs containing private data to the master botnet. Different techniques have been developed to identify these botnet activities, but only a few use Graph Neural Networks (GNNs) to analyze host activity by representing their communications with a directed graph. Although GNNs are intended to extract structural graph properties, they risk overfitting, which leads to failure when attempting to extract those properties from an unseen network. In this study, we test the notion that structural graph patterns might be used for efficient botnet detection, and we present SIR-GN, a structural iterative representation learning methodology for graph nodes. Our approach is built to work well with untested data, and our model is able to provide a vector representation for every node that captures its structural information. Finally, we demonstrate that, when the collection of node representation vectors is incorporated into a neural network classifier, our model outperforms the state-of-the-art GNN-based algorithms in the detection of bot nodes within unknown networks.
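The core idea of structural (rather than proximity-based) node representations can be shown with a toy iterative aggregation. SIR-GN's actual clustering-based aggregation is richer; this sketch only illustrates the principle that structurally equivalent nodes end up with identical vectors, which is what lets a downstream classifier transfer to unseen networks:

```python
def structural_node_vectors(adj, k_iter=2):
    """Toy iterative structural representation: start from the node degree
    and repeatedly append a summary (sum, min, max per coordinate) of the
    neighbors' current vectors. Nodes playing the same structural role get
    the same vector regardless of their identity."""
    reps = {v: [float(len(nbrs))] for v, nbrs in adj.items()}
    for _ in range(k_iter):
        new = {}
        for v, nbrs in adj.items():
            # transpose neighbor vectors into per-coordinate columns
            cols = list(zip(*(reps[u] for u in nbrs))) if nbrs else []
            agg = [x for col in cols for x in (sum(col), min(col), max(col))]
            new[v] = reps[v] + agg
        reps = new
    return reps
```

In a star graph, all leaves receive identical vectors while the hub's vector differs, which is the separation a bot-node classifier would exploit.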
Open Access Article
by Yonglin Li, Zhao Liu, Changtao Kan, Rongfei Qiao, Yue Yu and Changgang Li
Amid global decarbonization mandates, urban distribution networks (UDNs) face escalating voltage volatility due to proliferating distributed energy resources (DERs) and emerging loads (e.g., 5G base stations and data centers). While virtual power plants (VPPs) and network reconfiguration mitigate operational risks, extant methods inadequately model load flexibility and suffer from algorithmic stagnation in non-convex optimization. This study proposes a proactive voltage control framework addressing these gaps through three innovations. First, a dynamic cyber-physical load model quantifies 5G/data centers’ demand elasticity as schedulable VPP resources. Second, an Improved Termite Life Cycle Optimizer (ITLCO) integrates chaotic initialization and quantum tunneling to evade local optima, enhancing convergence in high-dimensional spaces. Third, a hierarchical control architecture coordinates the VPP reactive dispatch and topology adaptation via mixed-integer programming. The effectiveness and economic viability of the proposed strategy are validated through multi-scenario simulations of the modified IEEE 33-bus system (nominally 12.66 kV, though the approach is oriented toward a broader range of voltage scenarios). These advancements establish a scalable paradigm for UDNs to harness DERs and next-gen loads while maintaining grid stability under net-zero transitions. The methodology bridges theoretical gaps in flexibility modeling and metaheuristic optimization, offering utilities a computationally efficient tool for real-world implementation.
Open Access Article
Assigning Candidate Tutors to Modules: A Preference Adjustment Matching Algorithm (PAMA)
by Nikos Karousos, Despoina Pantazi, George Vorvilas and Vassilios S. Verykios
Matching problems arise in various settings where two or more entities need to be matched—such as job applicants to positions, students to colleges, organ donors to recipients, and advertisers to ad slots in web advertising platforms. This study introduces the preference adjustment matching algorithm (PAMA), a novel matching framework that pairs elements, which conceptually represent a bipartite graph structure, based on rankings and preferences. In particular, this algorithm is applied to tutor–module assignment in academic settings, and the methodology is built on four key assumptions: each module must receive its required number of candidates; candidates can be assigned to a module only once; eligible candidates, based on ranking and module capacity, must be assigned; and priority is given to mutual first-preference matches, with institutional policies guiding alternative strategies when needed. PAMA operates in iterative rounds, dynamically adjusting module and tutor preferences while addressing capacity and eligibility constraints. The distinctive innovative element of PAMA is that it combines concepts of maximal and stable matching, pending status, and deadlock resolution into a single process for matching tutors to modules to meet the specific requirements of academic institutions and their constraints. This approach achieves balanced assignments by adhering to ranking order and considering preferences on both sides (tutors and institution). PAMA was applied to a real dataset provided by the Hellenic Open University (HOU), in which 3982 tutors competed for 1906 positions within 620 modules. Its performance was tested through various scenarios and proved capable of effectively handling both single-round and multi-round assignments. PAMA effectively handles complex cases, allowing policy-based resolution of deadlocks. While it may lose maximality in such instances, it converges to stability, offering a flexible solution for matching-related problems.
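For context, a capacity-constrained deferred-acceptance pass, one of the classical building blocks that PAMA extends with preference adjustment, pending status, and deadlock resolution, can be sketched as follows (the data shapes are assumptions, not the paper's interfaces):

```python
def deferred_acceptance(tutor_prefs, module_rank, capacity):
    """Standard tutor-proposing deferred acceptance with module capacities.
    tutor_prefs: tutor -> ordered list of acceptable modules.
    module_rank: module -> {tutor: rank} (lower rank = preferred).
    capacity:    module -> number of positions."""
    assigned = {m: [] for m in capacity}
    nxt = {t: 0 for t in tutor_prefs}     # next module each tutor proposes to
    free = list(tutor_prefs)
    while free:
        t = free.pop()
        if nxt[t] >= len(tutor_prefs[t]):
            continue                      # tutor exhausted all preferences
        m = tutor_prefs[t][nxt[t]]
        nxt[t] += 1
        assigned[m].append(t)
        if len(assigned[m]) > capacity[m]:
            # over capacity: module rejects its least-preferred holder
            worst = max(assigned[m], key=lambda x: module_rank[m][x])
            assigned[m].remove(worst)
            free.append(worst)            # rejected tutor proposes again
    return assigned
```

With two single-seat modules and three tutors, the pass settles on the stable outcome in which each module keeps its highest-ranked proposer.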
Open Access Article
A Method for Synthesizing Ultra-Large-Scale Clock Trees
by Ziheng Li, Benyuan Chen, Wanting Wang, Hui Lv, Qinghua Lv, Jie Chen, Yan Wang, Juan Li and Cheng Zhang
As integrated circuit technology continues to advance, clock tree synthesis has become increasingly significant in the design of ultra-large-scale integrated circuits. Traditional clock tree synthesis methods often face challenges such as insufficient computational resources and buffer fan-out limitations when dealing with ultra-large-scale clock trees. To address this issue, this paper proposes an improved clock tree synthesis algorithm called incomplete balanced KSR (IB-KSR). Building upon the KSR algorithm, this proposed algorithm efficiently reduces the consumption of computational resources and constrains the fan-out of each buffer by incorporating incomplete minimum spanning tree (IMST) technology and a clustering strategy grounded in Balanced Split. In experiments, the IB-KSR algorithm was compared with the GSR algorithm. The results indicated that IB-KSR reduced the global skew of the clock tree by 43.4% and decreased the average latency by 34.3%. Furthermore, during program execution, IB-KSR maintained low computational resource consumption.
Open Access Article
Development of Optimal Size Range of Modules for Driving of Automatic Sliding Doors
by Ivo Malakov, Velizar Zaharinov and Hasan Hasansabri
The article is dedicated to the choice of an optimal size range of modules driving automatic sliding doors. The optimal size range is a compromise between the conflicting interests of manufacturers and users. The problem is particularly relevant, since the product is widely used in the construction sector, but there are no developments for the scientifically sound determination of the elements of the range. Most often in practice, one oversized module is used for all doors, regardless of the conditions of the specific problem, which leads to increased production and operating costs. Optimizing the size range will increase the competitiveness of the manufactured products and the efficiency of their application. To solve the problem, a developed approach is used, composed of several stages: determining the main parameter of the product; studying market demand; selecting an optimality criterion—the total costs for production and operation; determining the functional dependence between the costs and the influencing factors; and building a mathematical model of the problem. Based on a known optimization method, recurrent dependencies for calculating the total costs have been derived. Utilizing the developed algorithms and a software application, the optimal size range is determined.
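A size-range choice of this kind, deciding which module sizes to produce so that fixed setup costs and oversizing costs balance, admits a simple dynamic-programming sketch. The cost model and recurrence below are illustrative assumptions, not the paper's derived dependencies:

```python
def optimal_size_range(demands, unit_cost, setup_cost):
    """DP sketch of size-range selection. demands[i] units require capacity
    step i (steps sorted ascending); a produced size j can serve any step
    i <= j at per-unit cost unit_cost(i, j), which grows with oversizing;
    each produced size adds a fixed setup cost. Returns the minimum total
    cost of covering all steps."""
    n = len(demands)
    INF = float('inf')
    # best[j] = min cost to cover steps 0..j when the largest chosen size is j
    best = [INF] * n
    for j in range(n):
        for i in range(j + 1):
            # size j serves steps i..j; steps 0..i-1 were covered optimally
            seg = setup_cost + sum(demands[k] * unit_cost(k, j)
                                   for k in range(i, j + 1))
            prev = best[i - 1] if i > 0 else 0.0
            best[j] = min(best[j], prev + seg)
    return best[n - 1]
```

With two demand steps, a setup cost of 3, and a linear oversizing penalty, producing two sizes (cost 13 + 8 = 21) beats one oversized module for everything (cost 28), mirroring the paper's observation about the single-oversized-module practice.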
Open Access Article
by Meng Zhou, Xiaoyi Zhou, Zhijian Li, Xinyue Liu and Chengming Chen
Fatigue driving is one of the crucial factors causing traffic accidents. Most existing fatigue driving detection algorithms overlook individual driver characteristics, potentially leading to misjudgments. This article presents a novel detection algorithm that utilizes facial multi-feature fusion and thoroughly considers the driver's individual characteristics. To improve the accuracy of judging the driver's facial expressions, a personalized threshold based on the normalized opening and closing of the driver's eyes and mouth is proposed in place of the traditional average threshold, since individual drivers have eyes and mouths of different sizes. Given the dynamic changes in fatigue level, a sliding window model is designed to compute the blinking duration ratio (BF), yawning frequency (YF), and nodding frequency (NF), and these evaluation indexes are used in the feature fusion model. The reliability of the algorithm is verified by actual test results, which show that the detection accuracy reaches 95.6%, demonstrating good potential for fatigue detection applications. In this way, facial multi-feature fusion combined with full consideration of the driver's individual characteristics makes fatigue driving detection more accurate. Full article
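The personalized-threshold and sliding-window ideas can be sketched as follows. The calibration rule, the 0.8 factor, and the window size are illustrative assumptions, not the paper's parameters; the eye aspect ratio (EAR) stands in for whatever openness measure the detector produces:

```python
from collections import deque

def personalized_threshold(calib_ear, fraction=0.8):
    """Threshold as a fraction of this driver's own mean open-eye aspect
    ratio, measured during a short calibration phase (illustrative rule)."""
    return fraction * (sum(calib_ear) / len(calib_ear))

class BlinkRatioWindow:
    """Sliding window over per-frame eye aspect ratios; reports the
    fraction of recent frames with closed eyes (akin to the BF index)."""
    def __init__(self, threshold, size=90):          # e.g. 3 s at 30 fps
        self.threshold = threshold
        self.frames = deque(maxlen=size)             # deque drops old frames itself
    def update(self, ear):
        self.frames.append(ear < self.threshold)     # True = eye closed this frame
        return sum(self.frames) / len(self.frames)

thr = personalized_threshold([0.30, 0.32, 0.31, 0.29])   # driver-specific calibration
win = BlinkRatioWindow(thr, size=5)
ratios = [win.update(e) for e in [0.31, 0.12, 0.10, 0.30, 0.11]]
```

Because the threshold is derived from the driver's own calibration frames, a driver with naturally narrow eyes is not misclassified as drowsy the way a fixed population-average threshold would.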
Open Access Article
Phase Plane Trajectory Planning for Double Pendulum Crane Anti-Sway Control
by Kai Zhang, Wangqing Niu and Kailun Zhang
In view of the double pendulum characteristics of cranes in actual production, simply equating them to single pendulums and ignoring the mass of the hook leads to significant errors in the oscillation frequency. To tackle this issue, an input-shaping double pendulum anti-sway control method based on phase plane trajectory planning is proposed. This method generates the required acceleration signal by designing an input shaper, and calculates the acceleration switching times and amplitudes of the trolley according to the phase plane swing angle and the physical constraints of the system. This strategy ensures that the speed of the trolley and the swing angle of the load are always kept within the constraint range, so that the trolley reaches the target position accurately. Comparative analysis against existing control methods in numerical simulation shows that the proposed method significantly reduces the swing angle amplitude and lets the system reach a stable swing angle state faster. Numerical simulations and physical experiments confirm the effectiveness of the control method. Full article
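As a point of reference for the kind of input shaping described, the textbook two-impulse zero-vibration (ZV) shaper for a single oscillation mode can be written down directly; this is the standard ZV design, not the paper's specific double pendulum shaper:

```python
import math

def zv_shaper(freq_hz, zeta):
    """Two-impulse zero-vibration (ZV) shaper for a mode with natural
    frequency freq_hz (Hz) and damping ratio zeta.
    Returns (impulse_times, impulse_amplitudes); convolving the trolley
    acceleration command with these impulses cancels residual oscillation
    at that mode (standard ZV formula)."""
    wn = 2 * math.pi * freq_hz
    wd = wn * math.sqrt(1 - zeta ** 2)           # damped natural frequency
    K = math.exp(-zeta * math.pi / math.sqrt(1 - zeta ** 2))
    times = (0.0, math.pi / wd)                  # second impulse half a period later
    amps = (1 / (1 + K), K / (1 + K))            # amplitudes sum to 1
    return times, amps

times, amps = zv_shaper(freq_hz=0.5, zeta=0.0)   # undamped 0.5 Hz pendulum mode
```

For a double pendulum, one shaper is typically designed per pendulum mode and the two impulse sequences are convolved, at the cost of a longer command duration.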
Open Access Article
DeCGAN: Speech Enhancement Algorithm for Air Traffic Control
by Haijun Liang, Yimin He, Hanwen Chang and Jianguo Kong
Air traffic control (ATC) communication is susceptible to speech noise interference, which undermines the quality of civil aviation speech. To resolve this problem, we propose a speech enhancement model, termed DeCGAN, based on the DeConformer generative adversarial network. The model's generator, the DeConformer module, combines a time-frequency channel attention (TFC-SA) module and a deformable convolution-based feedforward neural network (DeConv-FFN) to effectively capture both long-range dependencies and local features of speech signals. The outputs from two branches, the mask decoder and the complex decoder, are amalgamated to produce an enhanced speech signal. An evaluation metric discriminator then derives speech quality evaluation scores, and adversarial training is employed to generate higher-quality speech. Experiments comparing DeCGAN with other speech enhancement models on the ATC dataset demonstrate that the proposed model is highly competitive: it achieves a perceptual evaluation of speech quality (PESQ) score of 3.31 and a short-time objective intelligibility (STOI) value of 0.96. Full article
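A minimal sketch of amalgamating a magnitude-mask branch with a complex branch is shown below. Scaling the noisy magnitude by the predicted mask, reusing the noisy phase, and adding a predicted complex residual is a common recipe in mask-plus-complex enhancement models; the paper's exact fusion may differ, and all names here are illustrative:

```python
import numpy as np

def combine_decoders(noisy_spec, mag_mask, complex_residual):
    """Fuse a magnitude-mask branch and a complex branch:
    enhanced = mask * |X| * exp(j*angle(X)) + complex_residual."""
    mag = np.abs(noisy_spec) * mag_mask          # masked magnitude
    phase = np.angle(noisy_spec)                 # reuse the noisy phase
    return mag * np.exp(1j * phase) + complex_residual

noisy = np.array([[1 + 1j, 2 + 0j]])             # toy 1x2 complex spectrogram
enhanced = combine_decoders(noisy,
                            np.array([[0.5, 1.0]]),          # predicted mask
                            np.zeros((1, 2), dtype=complex)) # predicted residual
```

The complex residual lets the model correct the phase that the pure masking branch leaves untouched.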
Open Access Review
by Mien L. Trinh, Dung T. Nguyen, Long Q. Dinh, Mui D. Nguyen, De Rosal Ignatius Moses Setiadi and Minh T. Nguyen
This paper focuses on algorithms and technologies for unmanned aerial vehicle (UAV) networking across different fields of application. Given the limitations of UAVs in both computation and communication, UAVs usually need algorithms for either low latency or energy efficiency. In addition, coverage problems should be considered to improve UAV deployment in many monitoring or sensing applications. Hence, this work first addresses common applications of UAV groups or swarms. Communication routing protocols are then reviewed, as they enable UAVs to support these applications. Furthermore, control algorithms are examined to ensure UAVs operate in optimal positions for specific purposes, and AI-based approaches are considered to enhance UAV performance. We provide either the latest work or evaluations of existing results that can suggest suitable solutions for specific practical applications. This work can be considered a comprehensive survey of both general and specific problems associated with UAVs in monitoring and sensing fields. Full article
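As a toy illustration of the coverage problems the survey discusses, a greedy set-cover placement gives a simple baseline: repeatedly place a UAV at the candidate position covering the most still-uncovered targets. This sketch is entirely illustrative and not drawn from any specific surveyed work:

```python
import math

def greedy_uav_placement(targets, candidates, radius, max_uavs):
    """Greedy coverage heuristic: at each step, choose the candidate
    position whose sensing disk of the given radius covers the most
    uncovered targets; stop when nothing new can be covered."""
    uncovered = set(range(len(targets)))
    placed = []
    for _ in range(max_uavs):
        def gain(c):
            return sum(1 for i in uncovered if math.dist(targets[i], c) <= radius)
        best = max(candidates, key=gain)
        if gain(best) == 0:          # no candidate covers anything new
            break
        placed.append(best)
        uncovered -= {i for i in uncovered if math.dist(targets[i], best) <= radius}
    return placed, uncovered

targets = [(0, 0), (1, 0), (5, 5), (6, 5)]
placed, left = greedy_uav_placement(targets, [(0.5, 0), (5.5, 5)],
                                    radius=1.0, max_uavs=2)
```

Greedy set cover carries the classical (1 - 1/e) approximation guarantee, which is why it recurs as a baseline in UAV deployment studies.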