Algorithms


Journal Description

Algorithms is a peer-reviewed, open access journal that provides an advanced forum for studies related to algorithms and their applications. It is published monthly online by MDPI. The European Society for Fuzzy Logic and Technology (EUSFLAT) is affiliated with Algorithms, and its members receive discounts on the article processing charges.

Impact Factor: 1.8 (2023); 5-Year Impact Factor: 1.9 (2023)

Latest Articles

8 pages, 191 KiB

Open Access Editorial

Algorithms for Game AI

Abstract

Games have long been benchmarks for AI algorithms and, with the boost of computational power and the application of new algorithms, AI systems have achieved superhuman performance in games once thought so complex that only humans could master them [...]

21 pages, 1045 KiB

Open Access Article

WIRE: A Weighted Item Removal Method for Unsupervised Rank Aggregation

by Leonidas Akritidis and Panayiotis Bozanis

Abstract

Rank aggregation deals with the problem of fusing multiple ranked lists of elements into a single aggregate list with improved element ordering. Such cases are frequently encountered in numerous applications across a variety of areas, including bioinformatics, machine learning, statistics, information retrieval, and so on. Weighted rank aggregation methods consider a more advanced version of the problem by assuming that the input lists are not of equal importance. In this context, they first apply ad hoc techniques to assign weights to the input lists and then study how to integrate these weights into the scores of the individual list elements. In this paper, we adopt the idea of exploiting the list weights not only during the computation of the element scores, but also to determine which elements will be included in the consensus aggregate list. More specifically, we introduce and analyze a novel refinement mechanism, called WIRE, that effectively removes the weakest elements from the less important input lists, thus improving the quality of the output ranking. We experimentally demonstrate the effectiveness of our method on multiple datasets by comparing it with a collection of state-of-the-art weighted and non-weighted techniques.
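
The exact WIRE refinement is defined in the paper itself; as a rough illustration of the general idea, the sketch below builds a weighted Borda-style aggregate and first trims the tails of the lighter-weight lists. The cut-off rule, the scoring, and the toy lists are assumptions, not the published method.

```python
# Illustrative sketch only: weighted Borda-style aggregation with a WIRE-like
# refinement that drops the weakest (bottom-ranked) items from the least
# important input lists before scoring.

def aggregate(ranked_lists, weights, drop_fraction=0.4):
    """ranked_lists: lists of items, best first. weights: one weight per list."""
    max_w = max(weights)
    pruned = []
    for lst, w in zip(ranked_lists, weights):
        # Drop a share of the tail that grows as the list weight shrinks.
        keep = len(lst) - int(drop_fraction * (1 - w / max_w) * len(lst))
        pruned.append(lst[:keep])

    scores = {}
    for lst, w in zip(pruned, weights):
        n = len(lst)
        for rank, item in enumerate(lst):
            # Weighted Borda points: higher for better ranks and heavier lists.
            scores[item] = scores.get(item, 0.0) + w * (n - rank)
    return sorted(scores, key=scores.get, reverse=True)

lists = [["a", "b", "c", "d", "e"], ["b", "a", "e", "c", "d"], ["e", "d", "c", "b", "a"]]
print(aggregate(lists, weights=[1.0, 0.8, 0.3]))
```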

29 pages, 351 KiB

Open Access Article

The Computability of the Channel Reliability Function and Related Bounds

by Holger Boche and Christian Deppe

Abstract

The channel reliability function is a crucial tool for characterizing the dependable transmission of messages across communication channels. In many cases, only upper and lower bounds of this function are known. We investigate the computability of the reliability function and its associated functions, demonstrating that the reliability function is not Turing computable. This also holds true for functions related to the sphere packing bound and the expurgation bound. Additionally, we examine the R∞ function and the zero-error feedback capacity, as they are vital in the context of the reliability function. Neither the R∞ function nor the zero-error feedback capacity is Banach–Mazur computable.

17 pages, 371 KiB

Open Access Article

A Box-Bounded Non-Linear Least Square Minimization Algorithm with Application to the JWL Parameter Determination in the Isentropic Expansion for Highly Energetic Material Simulation

by Yuri Caridi, Andrea Cucuzzella, Fabio Vicini and Stefano Berrone

Abstract

This work presents a robust box-constrained nonlinear least-squares algorithm for accurately fitting the parameters of the Jones–Wilkins–Lee (JWL) equation of state, which describes the isentropic expansion of detonation products from high-energy materials. The energetic-materials literature offers plenty of methods that address this problem, and in some cases it is not fully clear which method is employed. We provide a fully detailed numerical framework that explicitly enforces Chapman–Jouguet (CJ) constraints and systematically separates the contributions of different terms in the JWL expression. The algorithm leverages a trust-region Gauss–Newton method combined with singular value decomposition to ensure numerical stability and rapid convergence, even in highly overdetermined systems. The methodology is validated through comprehensive comparisons with leading thermochemical codes such as CHEETAH 2.0, ZMWNI, and EXPLO5. The results demonstrate that the proposed approach yields lower residual fitting errors and improved consistency with CJ thermodynamic conditions compared to standard fitting routines. By providing a reproducible and theoretically grounded methodology, this study advances the state of the art in JWL parameter determination and improves the reliability of energetic material simulations.
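
As a loose illustration of box-bounded fitting of the JWL isentrope P(V) = A*exp(-R1*V) + B*exp(-R2*V) + C*V^-(w+1), the sketch below uses SciPy's trust-region reflective solver as a stand-in for the paper's own Gauss–Newton/SVD scheme. The synthetic data, starting point, and bounds are illustrative assumptions.

```python
# Minimal sketch: box-bounded nonlinear least squares for JWL isentrope fitting.
import numpy as np
from scipy.optimize import least_squares

def jwl_pressure(params, V):
    A, B, C, R1, R2, w = params
    return A * np.exp(-R1 * V) + B * np.exp(-R2 * V) + C * V ** (-(w + 1.0))

def residuals(params, V, P_obs):
    return jwl_pressure(params, V) - P_obs

# Synthetic isentrope data (pressure vs. relative volume), for illustration only.
true = np.array([600.0, 10.0, 1.0, 4.5, 1.2, 0.3])
V = np.linspace(0.6, 7.0, 80)
P_obs = jwl_pressure(true, V) * (1 + 0.01 * np.random.default_rng(0).standard_normal(V.size))

x0 = np.array([500.0, 8.0, 0.8, 4.0, 1.0, 0.25])       # initial guess (assumed)
lower = [1.0, 0.1, 0.01, 1.0, 0.1, 0.1]                  # box constraints (assumed)
upper = [2000.0, 100.0, 10.0, 10.0, 5.0, 1.0]
fit = least_squares(residuals, x0, args=(V, P_obs), bounds=(lower, upper), method="trf")
print(fit.x)
```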

23 pages, 1093 KiB

Open Access Article

ADDAEIL: Anomaly Detection with Drift-Aware Ensemble-Based Incremental Learning

by Danlei Li, Nirmal-Kumar C. Nair and Kevin I-Kai Wang

Abstract

Time series anomaly detection in streaming environments faces persistent challenges due to concept drift, which gradually degrades model reliability. In this paper, we propose Anomaly Detection with Drift-Aware Ensemble-based Incremental Learning (ADDAEIL), an unsupervised anomaly detection framework that incrementally adapts to concept drift in non-stationary streaming time series data. ADDAEIL integrates a hybrid drift detection mechanism that combines statistical distribution tests with structure-based performance evaluation of the base detectors in an Isolation Forest. This design enables unsupervised detection and continuous adaptation to evolving data patterns. Based on the estimated drift intensity, an adaptive update strategy selectively replaces degraded base detectors. This allows the anomaly detection model to incorporate new information while preserving useful historical behavior. Experiments on both real-world and synthetic datasets show that ADDAEIL consistently outperforms existing state-of-the-art methods and maintains robust long-term performance in non-stationary data streams.
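
A minimal sketch of the general pattern described above, assuming an ensemble of scikit-learn Isolation Forests, a two-sample KS test as the distribution-drift signal, and replacement of part of the ensemble when drift is flagged. The window sizes, thresholds, and replacement rule are assumptions, not the ADDAEIL design.

```python
# Rough sketch: Isolation Forest ensemble with drift-triggered partial refresh.
import numpy as np
from scipy.stats import ks_2samp
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
reference = rng.normal(0, 1, (500, 1))
ensemble = [IsolationForest(random_state=i).fit(reference) for i in range(5)]

def anomaly_scores(window):
    # Average the (negated) normality scores of all base detectors.
    return -np.mean([m.score_samples(window) for m in ensemble], axis=0)

for step in range(10):
    drift = 0.4 * step                       # data slowly drifts away from the reference
    window = rng.normal(drift, 1, (200, 1))
    scores = anomaly_scores(window)
    stat, p = ks_2samp(reference.ravel(), window.ravel())
    if p < 0.01:                             # drift detected: refresh part of the ensemble
        n_replace = max(1, int(stat * len(ensemble)))
        for i in range(n_replace):
            ensemble[i] = IsolationForest(random_state=step * 10 + i).fit(window)
        reference = window                   # adopt the new reference distribution
    print(step, round(scores.mean(), 3), round(p, 4))
```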

20 pages, 25324 KiB

Open Access Article

DGSS-YOLOv8s: A Real-Time Model for Small and Complex Object Detection in Autonomous Vehicles

by Siqiang Cheng, Lingshan Chen and Kun Yang

Abstract

Object detection in complex road scenes is vital for autonomous driving, facing challenges such as object occlusion, small target sizes, and irregularly shaped targets. To address these issues, this paper introduces DGSS-YOLOv8s, a model designed to enhance detection accuracy and high-FPS performance within the You Only Look Once version 8 small (YOLOv8s) framework. The key innovation lies in the synergistic integration of several architectural enhancements: the DCNv3_LKA_C2f module, leveraging Deformable Convolution v3 (DCNv3) and Large Kernel Attention (LKA) for better capture of complex object shapes; an Optimized Feature Pyramid Network structure (Optimized-GFPN) for improved multi-scale feature fusion; the Detect_SA module, incorporating spatial Self-Attention (SA) at the detection head for broader context awareness; and an Inner-Shape Intersection over Union (IoU) loss function to improve bounding box regression accuracy. These components collectively target the aforementioned challenges in road environments. Evaluations on the Berkeley DeepDrive 100K (BDD100K) and Karlsruhe Institute of Technology and Toyota Technological Institute (KITTI) datasets demonstrate the model’s effectiveness. Compared to the baseline YOLOv8s, DGSS-YOLOv8s achieves mean Average Precision (mAP)@50 improvements of 2.4% (BDD100K) and 4.6% (KITTI). Significant gains were observed for challenging categories, notably 87.3% mAP@50 for cyclists on KITTI, and small object detection (AP-small) improved by up to 9.7% on KITTI. Crucially, DGSS-YOLOv8s achieved high processing speeds suitable for autonomous driving, operating at 103.1 FPS (BDD100K) and 102.5 FPS (KITTI) on an NVIDIA GeForce RTX 4090 GPU. These results highlight that DGSS-YOLOv8s effectively balances enhanced detection accuracy for complex scenarios with high processing speed, demonstrating its potential for demanding autonomous driving applications.

25 pages, 1991 KiB

Open Access Article

Crude Oil and Hot-Rolled Coil Futures Price Prediction Based on Multi-Dimensional Fusion Feature Enhancement

by Yongli Tang, Zhenlun Gao, Ya Li, Zhongqi Cai, Jinxia Yu and Panke Qin

Abstract

To address the challenges in forecasting crude oil and hot-rolled coil futures prices, the aim is to transcend the constraints of conventional approaches. This involves effectively predicting short-term price fluctuations, developing quantitative trading strategies, and modeling time series data. The goal is to enhance prediction accuracy and stability, thereby supporting decision-making and risk management in financial markets. A novel approach, the multi-dimensional fusion feature-enhanced (MDFFE) prediction method, has been devised. Additionally, a data augmentation framework leveraging multi-dimensional feature engineering has been established. Technical indicators, volatility indicators, time features, and cross-variety linkage features are integrated to build the prediction system, and a lag feature design is used to prevent data leakage. In addition, a deep fusion model is constructed, which combines the temporal feature extraction ability of a convolutional neural network with the nonlinear mapping advantage of an extreme gradient boosting tree. With the help of a three-layer convolutional neural network structure and an adaptive weight fusion strategy, an end-to-end prediction framework is constructed. Experimental results demonstrate that the MDFFE model excels in various metrics, including mean absolute error, root mean square error, mean absolute percentage error, coefficient of determination, and sum of squared errors. The mean absolute error reaches as low as 0.0068, while the coefficient of determination can be as high as 0.9970. In addition, the significance and stability of the model performance were verified by statistical methods such as a paired t-test and analysis of variance (ANOVA). The MDFFE algorithm offers a robust and practical approach for predicting commodity futures prices. It holds significant theoretical and practical value in financial market forecasting, enhancing prediction accuracy and mitigating forecast volatility.
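
The lag-feature design mentioned above is the part that is easiest to get wrong; the sketch below shows one hedged way to build leakage-free predictors in pandas, where every feature for day t uses only prices up to day t-1. Column names, window lengths, and the synthetic price series are assumptions for illustration.

```python
# Small sketch of leakage-free lag features for next-day price prediction.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
prices = pd.Series(100 + rng.standard_normal(300).cumsum(), name="close")
df = pd.DataFrame({"close": prices})

df["return_1d"] = df["close"].pct_change().shift(1)                 # yesterday's return
df["ma_5"] = df["close"].rolling(5).mean().shift(1)                 # 5-day average up to t-1
df["volatility_10"] = df["close"].pct_change().rolling(10).std().shift(1)
df["target"] = df["close"].shift(-1)                                # next day's close to predict

df = df.dropna()
print(df.head())
```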

25 pages, 5824 KiB

Open Access Article

Identifying Hubs Through Influential Nodes in Transportation Network by Using a Gravity Centrality Approach

by Worawit Tepsan, Aniwat Phaphuangwittayakul, Saronsad Sokantika and Napat Harnpornchai

Abstract

Hubs are strategic locations that function as central nodes within clusters of cities, playing a pivotal role in the distribution of goods, services, and connectivity. Identifying these vital hubs—through analyzing influential locations within transportation networks—is essential for effective urban planning, logistics optimization, and enhancing infrastructure resilience. This task becomes even more crucial in developing and less-developed countries, where such hubs can significantly accelerate urban growth and drive economic development. However, existing hub identification approaches face notable limitations. Traditional centrality measures often yield low variance in node scores, making it difficult to distinguish truly influential nodes. Moreover, these methods typically rely solely on either local metrics or global network structures, limiting their effectiveness. To address these challenges, we propose a novel method called Hybrid Community-based Gravity Centrality (HCGC), which integrates local influence measures, community detection, and gravity-based modeling to more effectively identify influential nodes in complex networks. Through extensive experiments, we demonstrate that HCGC consistently outperforms existing methods in terms of spreading ability across varying truncation radii. To further validate our approach, we introduce ThaiNet, a newly constructed real-world transportation network dataset. The results show that HCGC not only preserves the strengths of traditional local approaches but also captures broader structural patterns, making it a powerful and practical tool for real-world network analysis.
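
As background, a minimal sketch of plain gravity centrality (degree "masses" interacting over shortest-path distances within a truncation radius) is shown below; HCGC extends this building block with community detection and hybrid local measures. The toy graph and radius are assumptions for illustration.

```python
# Minimal sketch of gravity centrality truncated at a radius R.
import networkx as nx

def gravity_centrality(G, radius=3):
    deg = dict(G.degree())
    scores = {}
    for i in G.nodes:
        # Shortest-path distances from node i, truncated at the given radius.
        dist = nx.single_source_shortest_path_length(G, i, cutoff=radius)
        scores[i] = sum(deg[i] * deg[j] / d**2 for j, d in dist.items() if d > 0)
    return scores

G = nx.karate_club_graph()
scores = gravity_centrality(G)
print(sorted(scores, key=scores.get, reverse=True)[:5])   # five most influential nodes
```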

27 pages, 2140 KiB

Open Access Article

Effective Detection of Malicious Uniform Resource Locator (URLs) Using Deep-Learning Techniques

by Yirga Yayeh Munaye, Aneas Bekele Workneh, Yenework Belayneh Chekol and Atinkut Molla Mekonen

Abstract

The rapid growth of internet usage in daily life has led to a significant increase in cyber threats, with malicious URLs serving as a common vector for cybercrime. Traditional detection methods often suffer from high false alarm rates and struggle to keep pace with evolving threats due to outdated feature extraction techniques and datasets. To address these limitations, we propose a deep learning-based approach aimed at developing an effective model for detecting malicious URLs. Our proposed method, the Char2B model, leverages a fusion of BERT and CharBiGRU embeddings, further enhanced by a Conv1D layer with a kernel size of three and unit-sized stride and padding. After combining the embeddings, we used the BERT model as a baseline for comparison. The study involved collecting a dataset of 87,216 URLs, comprising both benign and malicious samples sourced from the open project directory (DMOZ), PhishTank, and Any.Run. Models were trained using the training set and evaluated on the test set using standard metrics, including accuracy, precision, recall, and F1-score. Through iterative refinement, we optimized the model’s performance to maximize its effectiveness. As a result, our proposed model achieved 98.50% accuracy, 98.27% precision, 98.69% recall, and a 98.48% F1-score, outperforming the baseline BERT model. Additionally, our model’s false positive rate of 0.017 was lower than the baseline model’s 0.018. By effectively extracting and utilizing informative features, the model accurately classified URLs into benign and malicious categories, thereby improving detection capabilities. This study highlights the significance of our deep learning approach in strengthening cybersecurity by integrating advanced algorithms that enhance detection accuracy, bolster defense mechanisms, and contribute to a safer digital environment.

40 pages, 3827 KiB

Open Access Review

A Review of Hybrid Vehicles Classification and Their Energy Management Strategies: An Exploration of the Advantages of Genetic Algorithms

by Yuede Pan, Kaifeng Zhong, Yubao Xie, Mingzhang Pan, Wei Guan, Li Li, Changye Liu, Xingjia Man, Zhiqing Zhang and Mantian Li

Abstract

This paper presents a comprehensive analysis of hybrid electric vehicle (HEV) classification and energy management strategies (EMS), with a particular emphasis on the application and potential of genetic algorithms (GAs) in optimizing energy management strategies for hybrid electric vehicles. Initially, the paper categorizes hybrid electric vehicles based on mixing rates and power source configurations, elucidating the operational principles and the range of applicability for different hybrid electric vehicle types. Following this, the two primary categories of energy management strategies—rule-based and optimization-based—are introduced, emphasizing their significance in enhancing energy efficiency and performance, while also acknowledging their inherent limitations. Furthermore, the advantages of utilizing genetic algorithms in optimizing energy management systems for hybrid vehicles are underscored. As a global optimization technique, genetic algorithms are capable of effectively addressing complex multi-objective problems by circumventing local optima and identifying the global optimal solution. The adaptability and versatility of genetic algorithms allow them to conduct real-time optimization across diverse driving conditions. Genetic algorithms play a pivotal role in hybrid vehicle energy management and exhibit a promising future. When combined with other optimization techniques, genetic algorithms can augment the optimization potential for tackling complex tasks. Nonetheless, the advancement of this technique is confronted with challenges such as cost, battery longevity, and charging infrastructure, which significantly influence its widespread adoption and application.

15 pages, 349 KiB

Open Access Article

Evolutionary Optimization for the Classification of Small Molecules Regulating the Circadian Rhythm Period: A Reliable Assessment

by Antonio Arauzo-Azofra, Jose Molina-Baena and Maria Luque-Rodriguez

Abstract

The circadian rhythm plays a crucial role in regulating biological processes, and its disruption is linked to various health issues. Identifying small molecules that influence the circadian period is essential for developing targeted therapies. This study explores the use of evolutionary optimization techniques to enhance the classification of these molecules. We applied a genetic algorithm to optimize feature selection and classification performance. Several tree-based learning classification algorithms (Decision Trees, Extra Trees, Random Forest, XGBoost) and a distance-based classifier (kNN) were employed. Their performance was evaluated using accuracy and F1-score, while considering their generalization ability with a validation set. The findings demonstrate that the proposed genetic algorithm improves classification accuracy and reduces overfitting compared to baseline models. Additionally, the use of variance in accuracy as a penalty factor may enhance the model’s reliability for real-world applications. Our study confirms that evolutionary optimization is an effective strategy for classifying small molecules regulating the circadian rhythm. The proposed approach not only improves predictive performance but also ensures a more robust model.
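
A small sketch of a genetic algorithm for feature selection in the spirit described above: binary chromosomes mask features, and fitness is cross-validated accuracy minus a variance penalty to discourage unstable subsets. The synthetic data, population size, and rates are assumptions, not the paper's settings.

```python
# Sketch: GA-based feature selection with a variance-penalized fitness.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=40, n_informative=8, random_state=0)

def fitness(mask):
    if mask.sum() == 0:
        return 0.0
    scores = cross_val_score(RandomForestClassifier(n_estimators=30, random_state=0),
                             X[:, mask.astype(bool)], y, cv=5)
    return scores.mean() - scores.var()        # penalize unstable feature subsets

pop = rng.integers(0, 2, size=(16, X.shape[1]))
for gen in range(10):
    fits = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(fits)[-8:]]                       # truncation selection
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(0, len(parents), 2)]
        cut = rng.integers(1, X.shape[1])
        child = np.concatenate([a[:cut], b[cut:]])             # one-point crossover
        flip = rng.random(X.shape[1]) < 0.02                   # bit-flip mutation
        child[flip] ^= 1
        children.append(child)
    pop = np.vstack([parents, *children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("selected features:", int(best.sum()), "of", X.shape[1])
```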

28 pages, 1589 KiB

Open Access Systematic Review

ChatGPT in Education: A Systematic Review on Opportunities, Challenges, and Future Directions

by Yirga Yayeh Munaye, Wasyihun Admass, Yenework Belayneh, Atinkut Molla and Mekete Asmare

Abstract

This study presents a systematic review on the integration of ChatGPT in education, examining its opportunities, challenges, and future directions. Utilizing the PRISMA framework, the review analyzes 40 peer-reviewed studies published from 2020 to 2024. Opportunities identified include the potential for ChatGPT to foster individualized educational experiences, tailoring learning to meet the needs of individual students. Its capacity to automate grading and assessments is noted as a time-saving measure for educators, allowing them to focus on more interactive and engaging teaching methods. However, the study also addresses significant challenges associated with utilizing ChatGPT in educational contexts. Concerns regarding academic integrity are paramount, as students might misuse ChatGPT for cheating or plagiarism. Additionally, issues such as ChatGPT bias are highlighted, raising questions about the fairness and inclusivity of ChatGPT-generated content in educational materials. The necessity for ethical governance is emphasized, underscoring the importance of establishing clear policies to guide the responsible use of AI in education. The findings highlight several key trends regarding ChatGPT’s role in enhancing personalized learning, automating assessments, and providing support to educators. The review concludes by stressing the importance of identifying best practices to optimize ChatGPT’s effectiveness in teaching and learning environments. There is a clear need for future research focusing on adaptive ChatGPT regulation, which will be essential as educational stakeholders seek to understand and manage the long-term impacts of ChatGPT integration on pedagogy.

16 pages, 1400 KiB

Open Access Article

An RMSprop-Incorporated Latent Factorization of Tensor Model for Random Missing Data Imputation in Structural Health Monitoring

by Jingjing Yang

Abstract

In structural health monitoring (SHM), ensuring data completeness is critical for enhancing the accuracy and reliability of structural condition assessments. SHM data are prone to random missing values due to signal interference or connectivity issues, making precise data imputation essential. A latent factorization of tensor (LFT)-based method has proven effective for such problems, with optimization typically achieved via stochastic gradient descent (SGD). However, SGD-based LFT models and other imputation methods exhibit significant sensitivity to learning rates and slow tail-end convergence. To address these limitations, this study proposes an RMSprop-incorporated latent factorization of tensor (RLFT) model, which integrates an adaptive learning rate mechanism to dynamically adjust step sizes based on gradient magnitudes. Experimental validation on a scaled bridge accelerometer dataset demonstrates that RLFT achieves faster convergence and higher imputation accuracy than state-of-the-art models, including SGD-based LFT and the long short-term memory (LSTM) network, with improvements of at least 10% in both imputation accuracy and convergence rate. It therefore offers a more efficient and reliable solution for missing data handling in SHM.
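
A toy sketch of the idea, assuming a rank-R CP factorization of a 3-way tensor trained with RMSprop on only the observed entries; the rank, shapes, and hyperparameters are illustrative and this is not the authors' RLFT implementation.

```python
# Toy sketch: latent factorization of a 3-way tensor with RMSprop updates,
# fitted on observed entries only and used to impute the missing ones.
import numpy as np

rng = np.random.default_rng(0)
I, J, K, R = 20, 15, 10, 4
# Ground-truth low-rank tensor with roughly 30% of entries randomly missing.
Ut, Vt, Wt = rng.normal(size=(I, R)), rng.normal(size=(J, R)), rng.normal(size=(K, R))
X = np.einsum('ir,jr,kr->ijk', Ut, Vt, Wt)
mask = rng.random(X.shape) > 0.3             # True where the entry is observed

U = 0.1 * rng.normal(size=(I, R))
V = 0.1 * rng.normal(size=(J, R))
W = 0.1 * rng.normal(size=(K, R))
caches = [np.zeros_like(U), np.zeros_like(V), np.zeros_like(W)]
lr, beta, eps = 0.05, 0.9, 1e-8

for epoch in range(500):
    err = (np.einsum('ir,jr,kr->ijk', U, V, W) - X) * mask    # residual on observed cells
    grads = [np.einsum('ijk,jr,kr->ir', err, V, W),
             np.einsum('ijk,ir,kr->jr', err, U, W),
             np.einsum('ijk,ir,jr->kr', err, U, V)]
    for p, g, c in zip((U, V, W), grads, caches):
        c *= beta
        c += (1 - beta) * g**2               # RMSprop: running average of squared gradients
        p -= lr * g / (np.sqrt(c) + eps)     # adaptive, per-parameter step size

X_hat = np.einsum('ir,jr,kr->ijk', U, V, W)
rmse = np.sqrt(np.mean((X_hat[~mask] - X[~mask])**2))
print("imputation RMSE on missing entries:", round(rmse, 4))
```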

24 pages, 2877 KiB

Open Access Article

Memory-Efficient Batching for Time Series Transformer Training: A Systematic Evaluation

by Phanwadee Sinthong, Nam Nguyen, Vijay Ekambaram, Arindam Jati, Jayant Kalagnanam and Peeravit Koad

Abstract

Transformer-based time series models are being increasingly employed for time series data analysis. However, their training remains memory intensive, especially with high-dimensional data and extended look-back windows. While model-level memory optimizations are well studied, the batch formation process remains an underexplored source of inefficiency. This paper introduces a memory-efficient batching framework based on view-based sliding windows operating directly on GPU-resident tensors. This approach eliminates redundant data materialization caused by tensor stacking and reduces data transfer volumes without modifying model architectures. We present two variants of our solution: (1) per-batch optimization for datasets exceeding GPU memory, and (2) dataset-wise optimization for in-memory workloads. We evaluate our proposed batching framework systematically, using peak GPU memory consumption and epoch runtime as efficiency metrics across varying batch sizes, sequence lengths, feature dimensions, and model architectures. Results show consistent memory savings, averaging 90%, and runtime improvements of up to 33% across multiple transformer-based models (Informer, Autoformer, Transformer, and PatchTST) and a linear baseline (DLinear) without compromising model accuracy. We extensively validate our method using synthetic and standard real-world benchmarks, demonstrating accuracy preservation and practical scalability in distributed GPU environments. The proposed method highlights the batch formation process as a critical component for improving training efficiency.
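
A minimal sketch of the view-based sliding-window idea using PyTorch's Tensor.unfold, which exposes all look-back windows as a strided view of one resident tensor instead of stacking per-window copies; the shapes and the copy-based baseline below are illustrative assumptions.

```python
# Sketch: copy-based vs. view-based batch formation for sliding windows.
import torch

series = torch.randn(100_000, 8)             # (time steps, features)
if torch.cuda.is_available():
    series = series.cuda()                   # keep the whole series GPU-resident

lookback = 96

# Copy-based batching: stacking slices materializes every window separately.
stacked = torch.stack([series[i:i + lookback] for i in range(1024)])

# View-based batching: unfold along time yields (windows, features, lookback)
# while sharing storage with `series`; permute is also just a view.
windows = series.unfold(0, lookback, 1)
batch = windows[:1024].permute(0, 2, 1)      # (batch, lookback, features)

print(stacked.shape, batch.shape)
print(torch.allclose(stacked, batch))        # identical values, no per-window copies
```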

22 pages, 9553 KiB

Open Access Article

Testing the Effectiveness of Voxels for Structural Analysis

by Sara Gonizzi Barsanti and Ernesto Nappi

Abstract

To assess the condition of cultural heritage assets for conservation, reality-based 3D models can be analyzed using finite element analysis (FEA) software, yielding valuable insights into their structural integrity. Three-dimensional point clouds obtained through photogrammetric and laser scanning techniques can be transformed into volumetric data suitable for FEA by utilizing voxels. When directly using the point cloud data in this process, it is crucial to employ the highest level of accuracy. The fidelity of point clouds can be compromised by various factors, including uncooperative materials or surfaces, poor lighting conditions, reflections, intricate geometries, and limitations in the precision of the instruments. This noise not only skews the inherent structure of the point cloud but also introduces extraneous information. Hence, the geometric accuracy of the resulting model may be diminished, ultimately impacting the reliability of any analyses conducted upon it. The removal of noise from point clouds, known as point cloud denoising, is a crucial aspect of 3D data processing and is gaining significant attention due to its ability to reveal the true underlying point cloud structure. This paper focuses on evaluating the geometric precision of the voxelization process, which transforms denoised 3D point clouds into volumetric models suitable for structural analyses.
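
A minimal sketch of the voxelization step itself, assuming a simple occupancy grid: points are snapped to an integer grid of a chosen voxel size and the set of occupied voxels is kept. The voxel size and the random cloud are assumptions for illustration.

```python
# Sketch: occupancy-grid voxelization of a (denoised) point cloud.
import numpy as np

def voxelize(points, voxel_size):
    """points: (N, 3) array of XYZ coordinates. Returns occupied voxel indices."""
    origin = points.min(axis=0)
    indices = np.floor((points - origin) / voxel_size).astype(np.int64)
    occupied = np.unique(indices, axis=0)    # one entry per occupied voxel
    return occupied, origin

rng = np.random.default_rng(0)
cloud = rng.random((50_000, 3)) * [2.0, 1.0, 3.0]        # synthetic cloud in metres
voxels, origin = voxelize(cloud, voxel_size=0.05)
print(f"{len(cloud)} points -> {len(voxels)} occupied 5 cm voxels")
```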

23 pages, 676 KiB

Open Access Article

Numerical and Theoretical Treatments of the Optimal Control Model for the Interaction Between Diabetes and Tuberculosis

by Saburi Rasheed, Olaniyi S. Iyiola, Segun I. Oke and Bruce A. Wade

Abstract

We primarily focus on the formulation and the theoretical and numerical analyses of a non-autonomous model for tuberculosis (TB) prevention and control programs in a population that includes individuals suffering from the double burden of tuberculosis and diabetes. The model incorporates four time-dependent control functions, saturated treatment of non-infectious individuals harboring tuberculosis, and a saturated incidence rate. Furthermore, the basic reproduction number of the autonomous form of the proposed optimal control mathematical model is calculated. Sensitivity indexes with respect to the constant control parameters reveal that the proposed control and preventive measures will reduce the tuberculosis burden in the population. This study establishes that the combination of campaigns that teach people how the development of tuberculosis and diabetes can be prevented, a treatment strategy that provides saturated treatment to non-infectious individuals exposed to tuberculosis infection, and prompt effective treatment of individuals infected with tuberculosis is the optimal strategy to achieve zero TB by 2035.

16 pages, 2603 KiB

Open Access Article

A Novel Model for Accurate Daily Urban Gas Load Prediction Using Genetic Algorithms

by Xi Chen, Feng Wang, Li Xu, Taiwu Xia, Minhao Wang, Gangping Chen, Longyu Chen and Jun Zhou

Abstract

As natural gas consumption increases year by year, the shortage of urban natural gas reserves leads to an increasingly serious supply–demand imbalance. It is therefore particularly important to establish a correct and reasonable daily gas load forecasting model that delivers accurate and reliable results. Most current prediction models combine the characteristics of the gas data with the prediction model itself, while the influencing factors are often given little consideration. To address this problem, the basic concept of multiple weather parameters (MWP) was introduced, and the influence of factors such as the average temperature, solar radiation, cumulative temperature, wind power, and temperature change of the building foundation on the daily load of urban gas was analyzed. A multiple weather parameter–daily load prediction (MWP-DLP) model based on System Thermal Days (STD) was established, and a genetic algorithm was used to solve the model. The daily gas load in a city was predicted, and the results were analyzed. The results show that the trend of the daily gas load predicted by the MWP-DLP model was basically consistent with the actual values. The maximum relative error was 8.2%, and the mean absolute percentage error (MAPE) was 2.68%. The feasibility of the MWP-DLP prediction model was verified, which has practical significance for gas companies in formulating reasonable peak-shaving schemes and deciding how much natural gas to reserve.

14 pages, 698 KiB

Open Access Article

Inferring the Timing of Antiretroviral Therapy by Zero-Inflated Random Change Point Models Using Longitudinal Data Subject to Left-Censoring

by Hongbin Zhang, McKaylee Robertson, Sarah L. Braunstein, David B. Hanna, Uriel R. Felsen, Levi Waldron and Denis Nash

Abstract

We propose a new random change point model that utilizes routinely recorded individual-level HIV viral load data to estimate the timing of antiretroviral therapy (ART) initiation in people living with HIV. The change point is assumed to follow a zero-inflated exponential distribution, while the longitudinal data, which are subject to left-censoring, are described by a nonlinear mixed-effects model as the underlying data-generating mechanism. We extend the Stochastic EM (StEM) algorithm by combining a Gibbs sampler with Metropolis–Hastings sampling. We apply the method to real HIV data to infer the timing of ART initiation since diagnosis. Additionally, we conduct simulation studies to assess the performance of our proposed method.

27 pages, 552 KiB

Open Access Article

Automatic Generation of Synthesisable Hardware Description Language Code of Multi-Sequence Detector Using Grammatical Evolution

by Bilal Majeed, Rajkumar Sarma, Ayman Youssef, Douglas Mota Dias and Conor Ryan

Abstract

Quickly designing digital circuits that are both correct and efficient poses significant challenges. Electronics, especially those incorporating sequential logic circuits, are complex to design and test. While Electronic Design Automation (EDA) tools aid designers, they do not fully automate the creation of synthesisable circuits that can be directly translated into hardware. This paper introduces a system that employs Grammatical Evolution (GE) to automatically generate synthesisable Hardware Description Language (HDL) code for the Finite State Machine (FSM) of a Multi-Sequence Detector (MSD). This MSD differs significantly from prior work in that it can detect multiple sequences, in contrast to the single-sequence detectors discussed in the existing literature. Sequence Detectors (SDs) are essential in circuits that detect sequences of specific events to produce timely alerts. The proposed MSD applies to a real-time vending machine scenario, enabling customer selections upon successful payment; however, the technique can evolve any MSD, such as a traffic light control system or a robot navigation system. We examine two parent selection techniques, Tournament Selection (TS) and Lexicase Selection (LS), demonstrating that LS performs better than TS, although both techniques successfully produce synthesisable hardware solutions. Both hand-crafted “Gold” and evolved circuits are synthesised using Generic Process Design Kit (GPDK) technologies at the 45 nm, 90 nm, and 180 nm scales, demonstrating their efficacy.

15 pages, 920 KiB

Open Access Article

A Novel Connected-Components Algorithm for 2D Binarized Images

by Costin-Anton Boiangiu, Giorgiana-Violeta Vlăsceanu, Constantin-Eduard Stăniloiu, Nicolae Tarbă and Mihai-Lucian Voncilă

Abstract

This paper introduces a new memory-efficient algorithm for connected-components labeling in binary images, which is based on run-length encoding. Unlike conventional pixel-based methods that scan and label individual pixels using global buffers or disjoint-set structures, our approach encodes rows as linked segments and merges them using a union-by-size strategy. We accelerate run detection by using a precomputed 16-bit cache of binary patterns, allowing for fast decoding without relying on bitwise CPU instructions. When compared against other run-length encoded algorithms, such as the Scan-Based Labeling Algorithm or Run-Based Two-Scan, our method is up to 35% faster on most real-world datasets. While other binary-optimized algorithms, such as Bit-Run Two-Scan and Bit-Merge Run Scan, are up to 45% faster than our algorithm, they require much higher memory usage. Compared to them, our method tends to reduce memory consumption on some large document datasets by up to 80%.
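
A compact reference sketch of run-length-based labeling with union-by-size (rows become runs; runs in adjacent rows merge when they overlap), written in plain Python to illustrate the general approach rather than the paper's cache-accelerated implementation.

```python
# Sketch: run-length connected-components labeling with union-by-size.
import numpy as np

def label_runs(image):
    """image: 2D binary numpy array. Returns a label image (0 = background)."""
    parent, size = [], []

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]    # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return
        if size[ra] < size[rb]:              # union by size: attach smaller tree
            ra, rb = rb, ra
        parent[rb] = ra
        size[ra] += size[rb]

    runs, prev_row_runs = [], []
    for r, row in enumerate(image):
        row_runs, c, w = [], 0, len(row)
        while c < w:
            if row[c]:
                start = c
                while c < w and row[c]:
                    c += 1                   # extend the run of foreground pixels
                rid = len(parent)
                parent.append(rid)
                size.append(1)
                # Merge with 4-connected runs of the previous row that overlap.
                for (ps, pe, pid) in prev_row_runs:
                    if ps < c and pe > start:
                        union(rid, pid)
                row_runs.append((start, c, rid))
                runs.append((r, start, c, rid))
            else:
                c += 1
        prev_row_runs = row_runs

    labels = np.zeros(image.shape, dtype=np.int32)
    roots = {}
    for (r, s, e, rid) in runs:
        root = find(rid)
        labels[r, s:e] = roots.setdefault(root, len(roots) + 1)
    return labels

img = np.array([[1, 1, 0, 0, 1],
                [0, 1, 0, 1, 1],
                [0, 0, 0, 0, 0],
                [1, 0, 1, 1, 0]], dtype=bool)
print(label_runs(img))
```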
