ABIR SHAKE IBNA | Arkansas State University
Papers by ABIR SHAKE IBNA
Journal of Computer Science and Technology Studies, 2024
Cardiovascular disease (CVD) is a leading cause of death in the world; by 2025, an estimated 23.6 million people will be affected by CVD. The healthcare industry therefore collects large volumes of cardiovascular case information to support doctors in finding and recognizing potential CVD risk factors through mining and analysis. This structured and unstructured case information can be mined with deep learning algorithms to uncover potential patterns in diseases and symptoms. The risk factors associated with cardiovascular disease are known from epidemiology, and this is the first prospective investigation of the condition in a free-living community population. If cardiovascular disease can be predicted, physicians can anticipate it and intervene early; clinical data analysis is therefore directed toward CVD prediction. The main contents and contributions are as follows. We first attempt prediction using classic data mining and machine learning techniques, and the results are far from ideal; analysis shows this is mainly caused by the imbalance of the data set. To address this imbalance, a SMOTE oversampling method is proposed for the cardiovascular data collected in the Framingham community. The relationship between LSTM(P) and its cell state is then examined, and a cardiovascular disease prediction technique based on LSTM(P) is realised. Lastly, experiments are conducted on the baseline medical records of the 4,434 individuals in the data set. The algorithm achieves an MCC of 0.96 and an accuracy of about 94%.
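For illustration, here is a minimal sketch of the SMOTE oversampling step described above, written with the widely used imbalanced-learn package; the synthetic feature table, class ratio, and split below are placeholders rather than the paper's actual Framingham variables.

```python
# Illustrative SMOTE oversampling for an imbalanced CVD-style dataset (not the paper's exact pipeline).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for Framingham-style tabular features (age, blood pressure, cholesterol, ...).
X, y = make_classification(n_samples=4434, n_features=15, weights=[0.85, 0.15], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

print("Class counts before SMOTE:", np.bincount(y_train))
X_res, y_res = SMOTE(random_state=42).fit_resample(X_train, y_train)
print("Class counts after SMOTE:", np.bincount(y_res))
# X_res / y_res would then feed the LSTM(P)-based predictor described in the abstract.
```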
Journal of Computer Science and Technology Studies, 2024
Melanoma, the most dangerous type of skin cancer, requires early and accurate diagnosis so that treatment can reduce mortality and improve outcomes. Even with better imaging and diagnostic techniques, it is still difficult to differentiate benign lesions from malignant melanoma because of overlapping features, noisy images, and artefacts such as hair and glare. To overcome these challenges, this research adopts deep learning models to classify skin lesions using images from the ISIC Archive dataset. The study establishes a robust two-stage classification framework. First, noise reduction, ROI cropping, and data augmentation techniques are used for pre-processing. Second, lesion classification is performed using a ResNet-based convolutional neural network (CNN) architecture. The model is trained and validated on a balanced dataset containing an equal number of benign and malignant lesions. Using accuracy, precision, recall, F1-score, and AUC, the system is assessed and compared with other state-of-the-art approaches. The findings show that the proposed model achieves a high level of classification performance and strong discriminative ability between melanoma and benign lesions. The ROC curve illustrates the model's performance, and the confusion matrix reveals misclassification tendencies and areas for improvement. Sophisticated preprocessing improves model performance, addressing the issues arising from noise in the data. This research contributes to dermatological diagnostics by offering a scalable, automated means of skin-lesion classification. The proposed framework can be applied clinically to assist dermatologists in early melanoma detection, thereby benefiting patients. Subsequent studies will address combined approaches and improved interpretability aids in order to increase diagnostic accuracy and practical applicability in clinical practice.
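As a rough illustration of the second stage, the sketch below sets up a ResNet-based binary lesion classifier in PyTorch; the backbone choice (ResNet-50), input size, and one-step training loop are assumptions for demonstration, not the paper's exact configuration, and a recent torchvision with the weights API is assumed.

```python
# Hedged sketch: a ResNet-based benign-vs-malignant classifier (architecture only; data are dummy tensors).
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: benign vs. malignant

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a dummy batch of preprocessed 224x224 lesion crops.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("Dummy-batch loss:", loss.item())
```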
Journal of Computer Science and Technology Studies, 2024
In this study, six convolutional neural network (CNN) architectures, VGG16, Inception-v3, ResNet, MobileNet, NasNet, and EfficientNet, are tested on classifying dermatological lesions. The research preprocesses skin-lesion data and extracts features to achieve accurate skin-lesion classification on two benchmark datasets, HAM10000 and ISIC-2019. The CNN models extract features from the filtered, resized images (uniform dimensions: 128 × 128 × 3 pixels). The results show that EfficientNet consistently achieves higher accuracy, precision, recall, and F1-score than any other model on melanoma, basal cell carcinoma, and actinic keratoses, with 94.0%, 92.0%, and 93.8%, respectively. NasNet also demonstrates competitive performance on eczema and psoriasis. The study concludes that proper preprocessing and an optimized CNN architecture are important for dermatological image classification. The results are promising; however, challenges remain, such as dataset imbalance and the need for larger, ethically gathered datasets. Future work will improve dataset diversity and model generalization through interdisciplinary collaboration and advanced CNN architectures.
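A hedged sketch of the kind of transfer-learning setup this comparison implies, here with Keras EfficientNetB0 on 128 × 128 × 3 inputs; the seven-class head (matching HAM10000's lesion categories) and the frozen-backbone choice are illustrative assumptions rather than the study's exact training recipe.

```python
# Illustrative EfficientNet feature extractor for dermatological image classification.
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.EfficientNetB0(include_top=False, weights="imagenet",
                                             input_shape=(128, 128, 3))
base.trainable = False  # use the pretrained network as a fixed feature extractor

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.3),
    layers.Dense(7, activation="softmax"),  # e.g. the 7 HAM10000 lesion classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```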
Institute of Electrical and Electronics Engineers (IEEE), 2024
Undocumented migration presents a significant public health concern due to the potential spread of infectious illnesses and broader health risks. Data were gathered between January 2018 and December 2022. The present study employs contemporary technological methodologies, including big data analytics and machine learning, to better understand health risks and the dissemination patterns of prominent diseases among undocumented immigrants. The main conditions examined included tuberculosis, hepatitis, and chronic disorders. Interpretation: the study findings indicate a greater likelihood of specific contagious diseases among undocumented immigrants. The applied machine learning models successfully extracted disease and health-risk information. According to our experimental analysis, the Convolutional Neural Network and K-Nearest Neighbors models achieved the highest performance, with 90% precision, 90% recall, and 80% F1-score, compared with other machine learning techniques: Logistic Regression (80% precision, 80% recall, 70% F1-score), Decision Tree (70% precision, 60% recall, 70% F1-score), and Naïve Bayes (50% precision, 80% recall, 60% F1-score). The study should help promote scrutiny in analyzing public health records and improve understanding of diseases associated with undocumented immigrants in the U.S.
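The classical baselines named above can be compared on precision, recall, and F1 with a few lines of scikit-learn; the synthetic dataset below merely stands in for the health-records features, so the printed scores are not the study's results.

```python
# Illustrative comparison of the classical models on a synthetic binary health-risk dataset.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import precision_score, recall_score, f1_score

X, y = make_classification(n_samples=2000, n_features=20, weights=[0.7, 0.3], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "NaiveBayes": GaussianNB(),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, clf in models.items():
    y_hat = clf.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: P={precision_score(y_te, y_hat):.2f} "
          f"R={recall_score(y_te, y_hat):.2f} F1={f1_score(y_te, y_hat):.2f}")
```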
Institute of Electrical and Electronics Engineers (IEEE), 2024
In diagnostics, accurate and timely identification of brain tumors can influence the patient's treatment plan and prognosis. This research proposes RanMerFormer, a novel model combining Vision Transformers (ViTs), Explainable AI (XAI) with Grad-CAM, and token merging methods for effective MRI-based brain tumor detection. The dataset comprises 7,023 MRI scans across four categories: glioma, meningioma, pituitary tumor, and no tumor. RanMerFormer outperformed a baseline CNN model, achieving an accuracy of 89.7%, precision of 90.1%, recall of 89.5%, and an F1-score of 89.8%. The Grad-CAM visualizations confirmed that the model focuses on relevant tumor regions. This research demonstrates the applicability of RanMerFormer in clinical practice and suggests an effective approach to diagnosing brain tumors.
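Grad-CAM itself is model-agnostic; the sketch below shows the usual hook-based recipe on a small pretrained ResNet rather than on RanMerFormer, purely to illustrate how the tumor-region heatmaps mentioned above are typically produced.

```python
# Minimal Grad-CAM sketch on a pretrained ResNet-18 (not the RanMerFormer model itself).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1).eval()
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["feat"] = out.detach()          # save the last conv feature maps

def bwd_hook(module, grad_in, grad_out):
    gradients["feat"] = grad_out[0].detach()    # save gradients w.r.t. those maps

layer = model.layer4[-1]
layer.register_forward_hook(fwd_hook)
layer.register_full_backward_hook(bwd_hook)

image = torch.randn(1, 3, 224, 224)             # stand-in for a preprocessed MRI slice
scores = model(image)
scores[0, scores.argmax()].backward()           # gradient of the top-class score

weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
cam = F.relu((weights * activations["feat"]).sum(dim=1))     # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear", align_corners=False)
print("Grad-CAM heatmap shape:", cam.shape)
```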
Journal of Computer Science and Technology Studies, 2024
Parkinson’s Disease (PD) is a serious chronic illness known to impair motor function, affecting movement and speech. Early diagnosis offers significant benefits, so it is essential that PD is detected as early as possible. This paper assesses the applicability of state-of-the-art vocal features, namely Vocal Tract Length Normalization (VTLN), Empirical Mode Decomposition (EMD), and Continuous Wavelet Transform (CWT), in combination with recent Machine Learning (ML) algorithms for the identification of PD. We performed a performance assessment of different models, including the Explainable Boosting Machine (EBM), Fast and Lightweight AutoML (FLAML), and NGBoost, using 195 recorded vocal samples. EBM was found to be the model with the highest accuracy of 86.67% and an AUC of 87.33%, while FLAML demonstrated a sensitivity score of 100%. The results of this work shed light on how careful analysis of the vocal material can be effectively combined with contemporary ML algorithms to enhance the accuracy of PD identification.
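A minimal sketch of the EBM evaluation described above, using the interpret package on a simulated stand-in for the 195-recording vocal feature table; the 22-feature, roughly 3:1 class-ratio setup mirrors the well-known Oxford PD voice dataset but is generated here, so the printed scores are illustrative only.

```python
# Hedged sketch: Explainable Boosting Machine on a synthetic 195-sample vocal-feature table.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, roc_auc_score

X, y = make_classification(n_samples=195, n_features=22, weights=[0.25, 0.75], random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=1)

ebm = ExplainableBoostingClassifier(random_state=1)
ebm.fit(X_tr, y_tr)
proba = ebm.predict_proba(X_te)[:, 1]
print("Accuracy:", accuracy_score(y_te, ebm.predict(X_te)))
print("AUC:", roc_auc_score(y_te, proba))
```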
Journal of Computer Science and Technology Studies, 2024
This paper provides a systematic review of the state of the art in the use of deep neural networks (DNNs) in medical imaging, an area that has developed rapidly with the emergence of artificial intelligence (AI) and machine learning (ML). Deep Neural Networks, including Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs), have shown excellent performance in handling massive imaging data and in assisting diagnostics, treatment planning, and patient care. The review focuses on the breakthroughs of DNNs in different imaging tasks, including classification, segmentation, registration, and detection, and highlights their potential to enhance diagnostic accuracy across organs such as the brain, lung, and chest. Key problems related to DNN deployment are also considered, including limitations of data, computational power, and model interpretability. Innovations such as transfer learning and synthetic data generation have helped mitigate these problems, improving model performance with limited data. The paper concludes by discussing future work, emphasizing greater model interpretability and the combination of clinical records with images. Overall, we attempt to offer a comprehensive review of the latest developments in DNNs for medical image analysis and delineate potential research directions for researchers and practitioners interested in applying DNNs to medical imaging tasks.
Elsevier, 2024
Lung cancer is one of the most frequent causes of death globally, mainly attributed to inadequate diagnostic methods and treatment opportunities. Detecting the disease at an early stage greatly improves the chances of survival, and even higher accuracy can be achieved with the help of machine learning tools. This study proposes the development of an optimized model for predicting lung cancer using deep learning and ensembling on a public dataset. The models used include ResNet101 and VER-Net, as well as an ensemble model combining CatBoost and Random Forest, with data oversampling performed through ADASYN. To compare the effectiveness of the models, evaluation measures included accuracy, precision, recall, F1-score, and AUC. Among all the models, VER-Net achieves the best results, with an accuracy of 97%, an AUC of 98%, and a precision of 100%. These results suggest how our models could aid early lung cancer detection. Future work will extend this approach to other health conditions and attempt to improve the models' performance by increasing the size of the dataset and exploring federated learning for data privacy.
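A hedged sketch of the ADASYN-plus-ensemble portion of such a pipeline; to keep dependencies minimal, scikit-learn's gradient boosting stands in for CatBoost, and the tabular data are simulated, so this is not the paper's ResNet101/VER-Net workflow.

```python
# Illustrative ADASYN oversampling followed by a soft-voting ensemble.
from imblearn.over_sampling import ADASYN
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=1500, n_features=16, weights=[0.9, 0.1], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

# Rebalance only the training split.
X_res, y_res = ADASYN(random_state=7).fit_resample(X_tr, y_tr)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=7)),
                ("gb", GradientBoostingClassifier(random_state=7))],
    voting="soft",
)
ensemble.fit(X_res, y_res)
print(classification_report(y_te, ensemble.predict(X_te)))
```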
International Journal of Recent Scientific Research, 2018
Data mining is the process of discovering knowledge by analysing and extracting data from various large repositories, yielding information that is useful and functional for the user. Over time, data mining has grown rapidly and become a well-known technology for analysis and knowledge extraction. We are standing at a point where life can have a better understanding of its problems. Active research on data mining began some time ago, but the limits of this technology remain open questions: does this technology have limits for the future, or is it limitless in a growing world? Why is only data mining technology involved in the refining of data? How much does the future rely on this technology? Is this technology capable of becoming popular and more powerful in all respective fields? What are its limitations, and how will it dominate the future? In this paper we examine the facts about these growing fields, how the technology affects them, how much the future can rely on it, and why it is so important for the future. (Rygielski et al., 2002) describe data mining as "a set of methods used in the knowledge discovery process to distinguish previously unknown relationships and patterns within the data". According to (Prabakaran et al., 2018), "data mining is the process of Decision Tree, Neural Networks, Rule Inductions, Nearest Neighbours and Genetic Algorithms".
IJCSNS International Journal of Computer Science and Network Security, 2019
Over the last decade, data mining has been contributing substantially across different industries, and it has worked tremendously well in the healthcare sector for predicting life-threatening diseases in time. Applications of data mining can be classified into two subsets: policy development and decision-making or decision support. Despite the huge amount of work in data mining, it is still hard to locate the most valuable literature on this technology in healthcare. In this paper, a number of papers are reviewed and brought together on a single platform for better review and understanding, saving time and effort with respect to schemes, algorithms, techniques, and results. The paper consolidates the contributions of various healthcare researchers under a data mining categorization for prediction and decision-making, elaborating the different techniques and algorithms applied over the past decade to life-threatening diseases such as breast cancer, heart attack, HIV, lung cancer, and skin infections, along with their mean accuracies across studies.
Journal of Biotech Research, 2019
An advanced approach to logistic regression modeling with fractional polynomials is applied in place of traditional linear predictors to group the continuous covariates of a healthcare dataset. The real-life data were obtained from 500 diabetic patients in northwestern Nigeria. The statistical modeling and prediction of the grouped covariates are based analytically on the patients' variables, age and occupation, following the theories of “Royston and Altman” and “Royston and Sauerbrei”. For patients' age, the key-factor selection algorithm converges at ϕ(3, 3) with a deviance of 113.00 and a log-likelihood of -56.50. For patients' occupation, the algorithm converges at ϕ(-2, 3) with a deviance of 111.36 and a log-likelihood of -56.43. This second standard modeling approach with fractional polynomials provides excellent results on the healthcare dataset for investigating diabetic status. The method is also suitable for data on other diseases because it produces minimum deviance and maximum log-likelihood values.
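For readers unfamiliar with fractional polynomials: with a repeated power p, the FP2 terms are x^p and x^p·ln(x), so the ϕ(3, 3) age model corresponds to predictors age³ and age³·ln(age). The sketch below fits such a model with statsmodels on simulated data; the coefficients and deviance it prints are illustrative, not the paper's.

```python
# Illustrative fractional-polynomial (FP2) logistic fit with repeated power 3, on simulated data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
age = rng.uniform(20, 80, 500) / 10.0          # scale the covariate so powers stay numerically stable
eta = -3.0 + 0.01 * age**3 + 0.004 * (age**3) * np.log(age)
y = rng.binomial(1, 1 / (1 + np.exp(-eta)))    # simulated diabetic status

X = sm.add_constant(np.column_stack([age**3, (age**3) * np.log(age)]))
fit = sm.Logit(y, X).fit(disp=0)
print(fit.summary())
print("Deviance:", -2 * fit.llf, "Log-likelihood:", fit.llf)
```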
Journal of Economics, Finance and Accounting Studies, 2024
In this paper we examine the role of machine learning (ML) in predicting stock market trends within the BRICS economies. Complex, interdependent global and regional economic factors increasingly influence stock markets, which necessitates innovative techniques for trend analysis. Using state-of-the-art ML models, Support Vector Machines (SVMs), Random Forests, and neural networks, the study predicts market fluctuations based on historical stock data, economic indicators, and geopolitical events. The research emphasizes the increasing role of deep learning, especially models such as Transformers and LSTMs, in meeting the demand for highly accurate predictive systems in volatile markets. The analysis compares model performance across the BRICS nations, taking into account the financial and economic behaviour peculiar to each of them. The results illuminate how ML can provide actionable intelligence for investors and policymakers to better manage risk and make strategic investments, and they underscore the need to adopt sophisticated data-driven tools to navigate the intricacies of globalized financial systems. The study also helps explain how machine learning changes the perspective on stock market analysis: it shows how different ML techniques, such as Support Vector Machines, which are well suited to classification problems, and Random Forests, which are known to resist overfitting, can be applied to financial datasets, and it surveys cutting-edge tools for market prediction such as LSTMs, which handle sequential time-series data, and Transformers, which improve on traditional architectures with attention mechanisms. The paper also discusses data preprocessing methods, such as feature engineering and normalization, and their importance for model performance. Overall, the work highlights the value of ML literacy and provides future financial analysts and decision-makers with tools for addressing market volatility in a data-driven, strategic manner.
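As a concrete illustration of the LSTM component, the sketch below trains a one-layer Keras LSTM to forecast the next value of a simulated price series; the window length, layer size, and epoch count are arbitrary demonstration choices, not the study's settings.

```python
# Hedged sketch: next-step forecasting with an LSTM on a synthetic price series.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 1200)) + 100.0      # stand-in for a BRICS index series

window = 30
X = np.array([prices[i:i + window] for i in range(len(prices) - window - 1)])
y = prices[window + 1:]
X = X[..., None]                                        # shape: (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=3, batch_size=32, verbose=0)
print("Next-step forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```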
Journal of Economics, Finance and Accounting Studies, 2025
This paper analyzes how Artificial Intelligence (AI) and Machine Learning (ML) are helping to bridge gaps in economic growth across the BRICS countries. The BRICS countries are emerging economies challenged by increasing income inequality, industrial transformation, and the need for infrastructure development. This study applies AI-driven data analytics to macroeconomic datasets, identifying patterns and actionable insights for policy formulation and strategic decision-making. The research employs techniques including predictive modeling, clustering, and natural language processing (NLP) in areas such as trade optimization, resource allocation, and labour market analysis. Case examples document the successful introduction of AI systems to solve critical economic problems, from increasing healthcare access to raising productivity in agriculture. The findings illustrate the role of AI and ML in guiding BRICS policymakers toward informed, data-driven development, positioning AI as core to economic advancement, a solution to developmental gaps, and a driver of growth. The research contributes both through its practical outcomes and by providing insights into how AI and ML can solve the complex economic problems of emerging markets. The paper introduces predictive modeling, which anticipates economic trends from past data, and clustering, which groups similar economic behaviours to find patterns, as important tools in economic analysis. Natural Language Processing (NLP) is covered as a highly effective approach to analysing policy documents, news, and unstructured data to improve decision-making. By helping students, researchers, and policymakers understand these AI-powered techniques for optimizing trade, resource management, and labour, the study makes scalable solutions for sustainable development accessible. It presents data-driven innovation as a critical means of addressing global challenges, equipping readers with the skills and knowledge to leverage AI for economic progress in a dynamic, interconnected world.
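The clustering step mentioned above can be illustrated with a short scikit-learn example that groups country-year observations by standardized macroeconomic indicators; the indicator matrix here is random, so the cluster assignments carry no economic meaning.

```python
# Illustrative k-means clustering of country-level macroeconomic indicators.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Rows: country-year observations; columns: e.g. GDP growth, inflation, unemployment, trade balance.
indicators = rng.normal(size=(50, 4))

scaled = StandardScaler().fit_transform(indicators)              # put indicators on a common scale
labels = KMeans(n_clusters=3, n_init=10, random_state=42).fit_predict(scaled)
print("Observations per cluster:", np.bincount(labels))
```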
Journal of Economics, Finance and Accounting Studies, 2024
In this paper, we explore the use of different machine learning models for predicting currency exchange rates among the BRICS economies (Brazil, Russia, India, China, and South Africa). With global economic uncertainties rising, accurate, real-time currency trend forecasting has become increasingly important for policymakers, businesses, and investors. This study utilizes recent progress in ML algorithms, namely Long Short-Term Memory (LSTM) networks and the XGBoost ensemble method, to analyze historical exchange-rate data along with macroeconomic projections. These models are evaluated on their ability to capture the non-linearities and dynamism in the data and deliver significantly better performance than traditional econometric techniques. The research integrates large-scale datasets with real-world economic parameters and demonstrates how AI-driven forecasting might reduce risks in foreign exchange markets. The results show better accuracy and reliability than existing tools, suggesting that such tools can help BRICS countries better manage currency stability. The results have both academic and practical implications, highlighting the ways in which intelligent systems can transform economic decision-making in emerging markets. Additionally, this work provides educational insight into machine learning as a transformational tool for financial forecasting. It explores techniques such as LSTM networks, which excel at capturing temporal dependence in sequential data, and XGBoost, which has proven to perform strongly across a wide variety of data structures. We find that exploring how these models uncover patterns in massive datasets, and how they outperform traditional models like ARIMA, can benefit educators and students alike. The work also calls attention to the utility of feature selection and hyperparameter tuning to increase prediction accuracy. This paper bridges the gap between theory and implementation by providing a foundational starting point for those who wish to apply ML to real-world financial problems.
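A hedged sketch of the XGBoost side of such a forecasting setup, regressing the next exchange-rate value on a handful of lags of a simulated series; the lag count and hyperparameters are illustrative rather than tuned, and the series is not real BRICS data.

```python
# Illustrative XGBoost forecaster on lagged exchange-rate features (simulated series).
import numpy as np
from xgboost import XGBRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(1)
rate = np.cumsum(rng.normal(0, 0.01, 800)) + 5.0     # synthetic exchange-rate series

lags = 5
X = np.column_stack([rate[i:len(rate) - lags + i] for i in range(lags)])  # rows of 5 consecutive lags
y = rate[lags:]                                                           # value to predict
split = int(0.8 * len(y))                                                 # chronological train/test split

model = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X[:split], y[:split])
print("Test MAE:", mean_absolute_error(y[split:], model.predict(X[split:])))
```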
Journal of Environmental Science and Economics, 2024
This study investigates the impact of Artificial Intelligence (AI) innovation on the ecological footprint in the Nordic region from 1990 to 2020, alongside the effects of banking development, stock market capitalization, economic growth, and urbanization. Utilizing the STIRPAT model, the study incorporates cross-sectional dependence and slope homogeneity tests, revealing issues of heterogeneity and cross-sectional dependence. The analysis employs both first- and second-generation panel unit root tests, confirming that the variables are free from unit root problems. Panel cointegration tests demonstrate that the variables are cointegrated in the long run. To explore the short- and long-term relationships, the study utilizes the Panel Autoregressive Distributed Lag (ARDL) model. The Panel ARDL results indicate that economic growth, stock market capitalization, and urbanization positively correlate with the ecological footprint in both the short and long run. Conversely, AI innovation and banking development negatively correlate with the ecological footprint. To validate the Panel ARDL estimations, robustness checks are performed using Fully Modified OLS, Dynamic OLS, and Fixed Effects with OLS, all of which support the initial findings. Furthermore, the study employs the D-H causality test to identify causal relationships. The results show a unidirectional causal relationship running from AI innovation, stock market capitalization, and urbanization to the ecological footprint. In contrast, a bidirectional causal relationship exists between economic growth and the ecological footprint, as well as between banking development and the ecological footprint.
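For reference, the STIRPAT framework the study builds on is conventionally written as below (this is the textbook form, log-linearized for panel estimation; the notation is generic rather than the paper's exact variable set):

```latex
% STIRPAT: impact as a function of population, affluence, and technology
I_{it} = a \, P_{it}^{b} \, A_{it}^{c} \, T_{it}^{d} \, e_{it}
\;\;\Longrightarrow\;\;
\ln I_{it} = \ln a + b \ln P_{it} + c \ln A_{it} + d \ln T_{it} + \varepsilon_{it}
```

Here I is the environmental impact (the ecological footprint in this study), P population, A affluence, and T technology, with i indexing countries and t years.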
Journal of Environmental Science and Economics, 2024
The need for sustainable solutions has increased globally as a result of the growing environmental problems brought about by urbanization and industrialization. Given this, private investment in artificial intelligence (AI) has become a viable means of promoting environmental sustainability, mainly because of AI's capacity to minimize ecological footprints and maximize resource utilization. This research investigates the role of private investment in AI in promoting environmental sustainability in the United States from 1990 to 2019. It also analyzes the impact of financial globalization, technological innovation, and urbanization by testing the Load Capacity Curve (LCC) hypothesis. The research utilizes stationarity tests, which indicate that the variables are free from unit root problems and exhibit mixed orders of integration. Using the Autoregressive Distributed Lag (ARDL) model bounds test, the analysis finds that the variables are cointegrated in the long run. The short-run and long-run estimations of the ARDL model confirm the existence of the LCC hypothesis in the United States, revealing a U-shaped association between income and the load capacity factor. The findings show that private investment in AI has a significant positive correlation with the load capacity factor, thus promoting environmental sustainability. Conversely, technological innovation and financial globalization exhibit a negative correlation with the load capacity factor in both the short and long run. To validate the ARDL estimation approach, the study employs Fully Modified OLS, Dynamic OLS, and Canonical Cointegrating Regression estimation methods, all of which support the ARDL outcomes. Additionally, the Granger causality test reveals a unidirectional causal connection from private investment in AI, financial globalization, economic growth, technological innovation, and urbanization to the load capacity factor.
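A generic ARDL error-correction specification of the kind behind these short- and long-run estimates is shown below; y stands for the load capacity factor and x for a representative regressor, so this is standard textbook notation rather than the paper's full model:

```latex
% ARDL(p, q) in error-correction form
\Delta y_t = \alpha_0
  + \sum_{i=1}^{p} \beta_i \,\Delta y_{t-i}
  + \sum_{j=0}^{q} \gamma_j \,\Delta x_{t-j}
  + \lambda \bigl( y_{t-1} - \theta x_{t-1} \bigr)
  + \varepsilon_t
```

The error-correction coefficient λ captures the speed of adjustment back to the long-run relationship, and θ is the long-run coefficient examined by the bounds procedure.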
Journal of Environmental Science and Economics, 2024
In response to increasing environmental challenges, the United States has deliberately adopted technological advancements to promote sustainable development, including efforts to decrease pollution, improve energy efficiency, and encourage the use of environmentally friendly technology in different industries. This study investigates the role of Artificial Intelligence (AI) technology in promoting environmental sustainability in the United States from 1990 to 2019. It also examines the impacts of financial development, ICT use, and economic growth on the Load Capacity Factor (LCF). Various unit root tests revealed no unit root issues and mixed integration orders among variables. The Autoregressive Distributed Lag (ARDL) model explored cointegration, indicating long-run relationships among the variables. The ARDL findings confirm the Load Capacity Curve hypothesis for the United States, with AI technology and ICT use positively correlating with LCF in both the short and long run. Conversely, financial development and population growth significantly reduce LCF. Robustness checks using FMOLS, DOLS, and CCR estimation approaches align with the ARDL results. Granger causality tests reveal unidirectional causality from economic growth, AI, financial development, and ICT use to LCF, and bidirectional causality between population and LCF. Diagnostic tests confirm the results are free from heteroscedasticity, serial correlation, and specification errors. This study underscores the importance of AI and ICT in enhancing environmental sustainability while highlighting the adverse impacts of financial development and population growth on LCF.
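A pairwise Granger test like the one reported above can be run with statsmodels as sketched below; the two series are simulated so that lagged "ai_tech" genuinely helps predict "lcf", purely to show the mechanics rather than the study's actual data.

```python
# Hedged sketch: pairwise Granger causality on simulated series.
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 120
ai_tech = np.cumsum(rng.normal(size=n))          # stand-in for an AI-technology index
lcf = np.empty(n)
lcf[0] = 0.0
for t in range(1, n):
    # lcf depends on its own lag and on lagged ai_tech by construction
    lcf[t] = 0.5 * lcf[t - 1] + 0.3 * ai_tech[t - 1] + rng.normal()

data = pd.DataFrame({"lcf": lcf, "ai_tech": ai_tech})
# Tests whether lagged ai_tech (second column) helps predict lcf (first column).
grangercausalitytests(data[["lcf", "ai_tech"]], maxlag=2)
```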
Global Sustainability Research, 2024
This study investigates the role of Artificial Intelligence (AI) in promoting a sustainable environment within the G-7 countries by testing the Load Capacity Curve (LCC) hypothesis. Additionally, it examines the effects of financial development, the digital economy, and urbanization on the load capacity factor using data from 2010 to 2022. The research employs cross-sectional dependence and slope homogeneity tests, revealing issues of cross-sectional dependence and heterogeneity. Panel unit root tests, both first and second generation, confirm that the variables are free from unit root problems. Furthermore, panel cointegration tests indicate that the variables are cointegrated in the long run. To assess the impact of the explanatory variables on the load capacity factor, the study utilizes the Method of Moments Quantile Regression (MMQR). The findings reveal a U-shaped relationship between income and the load capacity factor, supporting the LCC hypothesis in the G-7 region. The results also indicate that AI innovation and financial development have a significant positive correlation with the load capacity factor. In contrast, the digital economy and urbanization are found to significantly reduce the load capacity factor. Robustness checks, including the Driscoll-Kraay standard error, Augmented Mean Group, and Common Correlated Effects Mean Group estimation approaches, validate the findings obtained from the MMQR method. Moreover, the Dumitrescu-Hurlin (D-H) causality assessment is utilized to explore the causal connections between variables. The results reveal a unidirectional causal relationship between income and the load capacity factor, and bidirectional causal relationships between the remaining explanatory variables and the load capacity factor.
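As a simple point of reference for the quantile approach, the sketch below fits ordinary conditional quantile regressions with statsmodels; note this is plain quantile regression, not the MMQR estimator used in the study, and the variables are simulated placeholders.

```python
# Illustrative conditional quantile regression (not MMQR) on simulated panel-style data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
df = pd.DataFrame({
    "income": rng.normal(10, 1, 200),          # stand-in for log income per capita
    "ai_innovation": rng.normal(0, 1, 200),    # stand-in for an AI-innovation index
})
df["lcf"] = 0.4 * df["ai_innovation"] - 0.2 * df["income"] + rng.normal(0, 0.5, 200)

for q in (0.25, 0.5, 0.75):
    fit = smf.quantreg("lcf ~ income + ai_innovation", df).fit(q=q)
    print(f"q={q}:", fit.params.round(3).to_dict())
```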
Journal of Environmental Science and Economics, 2024
This investigation analyses the influence of private AI investment and financial development (FD) on CO2 emissions in the United States, using the STIRPAT framework to account for the roles of GDP, population, and foreign direct investment (FDI). The data's robustness was verified through a variety of unit root tests, which confirmed that the variables are free of unit root issues and exhibit a mixed order of integration. The ARDL bounds test was used to investigate cointegration among the variables and found a long-run equilibrium relationship. The ARDL model results show that income, FDI, FD, and population significantly increase CO2 emissions in both the short and long term. In contrast, private investment in AI leads to a significant reduction in CO2 emissions over these time frames. Additional estimations were conducted using FMOLS, DOLS, and CCR methods to verify the ARDL results, all of which attested to the robustness of the initial findings. In addition, the study implemented a pairwise Granger causality test to illustrate the directional relationships between the variables. The findings indicate a unidirectional causal link running from GDP, private AI investment, FDI, and population to CO2 emissions, and bidirectional causality between CO2 emissions and FD. Diagnostic tests further corroborated the validity of the study's conclusions, confirming that the model is free from specification errors, serial correlation, and heteroscedasticity.
Global Sustainability Research, 2024
This study investigates the impact of AI innovation on environmental sustainability in the G-7 region from 2010 to 2022. Additionally, it tests the Load Capacity Curve (LCC) hypothesis in relation to financial accessibility, globalization, and urbanization. Cross-sectional dependence and slope homogeneity tests reveal the presence of cross-sectional dependence and heterogeneity issues. Panel unit root and panel cointegration tests confirm that the variables are free from unit root problems and are cointegrated in the long run. To identify significant factors influencing environmental sustainability, this study employs Panel ARDL and Quantile Regression methods. Both methods confirm the LCC hypothesis in the G-7 region, demonstrating a U-shaped relationship between income and the load capacity factor. The results indicate that AI innovation and financial accessibility are significantly positively correlated with the load capacity factor, while globalization and urbanization are negatively correlated, leading to lower environmental sustainability. To validate the robustness of the Panel ARDL and Quantile Regression results, Driscoll-Kraay standard errors, Augmented Mean Group, and Common Correlated Effects Mean Group estimation approaches are applied, all of which support the initial findings. Furthermore, the D-H causality test reveals unidirectional causality from economic growth, financial accessibility, globalization, and urbanization to the load capacity factor, and bidirectional causality between AI innovation and the load capacity factor.
Journal of Computer Science and Technology Studies, 2024
Cardiovascular vascular disease (CVD) is a leading cause of death in the world. By 2025, it is es... more Cardiovascular vascular disease (CVD) is a leading cause of death in the world. By 2025, it is estimated that 23.6 million people will be attacked by CVD. Thus, the health care industry is established in order to collect a large number of CVD information of cardiovascular disease and to support doctors in finding and recognizing its potential risk factors of CVD through mining and analyzing the information. This structured and unstructured case information may be used to find out potential patterns of diseases and symptoms using deep learning algorithms. The risk factors associated with cardiovascular disease are known from epidemiology, and this is the first prospective investigation on the condition in the community free mobility population. Physicians can anticipate cardiovascular disease and take early action if it can be predicted. Clinical data analysis is therefore considered to be the prediction of cardiovascular disease. The main contents and contributions are as follows: We start off with trying to predict using the classic data mining and machine learning techniques, and the results are not at all ideal. Generally speaking, after analysis, it is mainly caused by imbalance of data sets. To solve the problem of CVD data imbalance, a SMOTE oversampling method is proposed aiming at the imbalance of cardiovascular data collected in the Framingham community. In order to ensure accurate data collection during operation, the relationship between LSTM(P) and unit state is then tried, and the prediction technique of cardiovascular disease utilising LSTM(P) is realised. Lastly, tests are conducted to confirm the 4434 individuals' initial medical information in the data set. The algorithm has an MCC score of 0.96 and an accuracy of about 94%.
Journal of Computer Science and Technology Studies, 2024
Skin cancer of the most dangerous type, melanoma, requires an early and accurate diagnosis for it... more Skin cancer of the most dangerous type, melanoma, requires an early and accurate diagnosis for its treatment to reduce mortality and increase the number of positive outcomes. Even with the availability of better imaging and diagnostic techniques, it is still difficult to differentiate between benign lesions and malignant melanoma because of overlapping features, noisy images and images with artefacts such as hair and glare. To overcome these challenges, this research adopts deep learning models to classify skin lesions based on images from the ISIC Archive dataset. The study establishes a strong two-stage classification framework. Therefore, noise reduction, ROI cutting, and data enhancement techniques are used for data pre-processing. Second, lesion classification is performed using a ResNet-based convolutional neural network (CNN) architecture. The model is trained and validated on a balanced dataset that contains an equal number of benign and malignant lesion categories. Using accuracy, precision, recall, F1 score and AUC, the system can be assessed and compared to other state-of-art approaches. The findings show that the proposed model has a high level of classification performance and a high level of discriminative ability between melanoma and benign lesions. The ROC curve effectively exhibits the model's performance and accuracy, and the confusion matrix reveals tendencies to misclassify and where it should be improved. The application of sophisticated preprocessing methods improves model performance, responding to the issues arising from the presence of noise in data. This research is valuable to the field of dermatological diagnostics as it offers a scalable, automated means of skin lesion classification. The proposed framework can be applied clinically to assist dermatologists in the detection of early melanoma and, therefore benefit patients. Subsequent studies will address the development of combined approaches and the improvement of interpretability aids in order to increase the diagnostic accuracy and practical applicability of the methods in clinical practice.
Journal of Computer Science and Technology Studies, 2024
In this study, six convolutional neural network (CNN) architectures, VGG16, Inception-v3, ResNet,... more In this study, six convolutional neural network (CNN) architectures, VGG16, Inception-v3, ResNet, MobileNet, NasNet, and EfficientNet are tested on classifying dermatological lesions. The research preprocesses and features extracts skin lesions data to achieve an accurate skin lesion classification in employing two benchmark datasets, HAM10000 and ISIC-2019. The CNN models then extract features from the filtered, resized images (uniform dimensions: 128 × 128 × 3 pixels). These results show that EfficientNet consistently achieves higher accuracy, precision, recall, and F1-score than any other model on melanoma, basal cell carcinoma and actinic keratoses, with 94.0%, 92.0%, 93.8%, respectively. The competitive performance of NasNet is also demonstrated for eczema and psoriasis. This study concludes that proper preprocessing and optimized CNN architecture are important for dermatological image classification. The results are promising, however, challenges such as the imbalance in the datasets and the requirement for larger ethically gathered datasets exist. For future work, dataset diversity will be improved, along with model generalization, through interdisciplinary collaboration and advanced CNN architectures.
Institute of Electrical and Electronics Engineers (IEEE), 2024
The main issue is undocumented migration, it presents a significant concern to public health due ... more The main issue is undocumented migration, it presents a significant concern to public health due to the potential spread of infectious illnesses and overall health risks. Data was gathered between January 2018 and December 2022. The present study employs contemporary technology methodologies, including big data analytics and machine learning, to better understand health risks and the dissemination patterns of prominent diseases among undocumented immigrants. The main infectious diseases, included Tuberculosis, hepatitis, and chronic disorders. Interpretation: The study findings indicate a greater likelihood of specific contagious diseases among undocumented immigrants. The applied machine learning models successfully obtained diseases health risk information. According to our experimental analysis, the Convolutional Neural Network and K-Nearest Neighbors model achieved the highest performance with 90% Precision, 90% Recall, and 80% F1-Score, respectively, compared to other machine learning techniques - Logistic Regression (80% Precision, 80% Recall, and a 70% F1-Score), Decision Tree (70% Precision, 60% Recall, 70% F1-Score), and Naïve Bayes (50% Precision, 80% Recall, and 60% F1-Score). The study will be helpful in promoting scrutiny in analyzing public health records and understanding of diseases associated with undocumented immigrants in the U.S.
Institute of Electrical and Electronics Engineers(IEEE), 2024
In diagnostics, accurate and timely identification of brain tumors can influence the outcome of t... more In diagnostics, accurate and timely identification of brain tumors can influence the outcome of the patient's treatment plan and prognosis. This research proposes RanMer-Former, a novel model combining Vision Transformers (ViTs), Explainable AI (XAI) with Grad-CAM, and token merging methods for effective MRI-based brain tumor detection. The dataset comprises 7,023 MRI scans across four categories: Thus, it has been classified as either having Glioma, Meningioma, Pituitary tumors or No Tumor. RanMerFormer outperformed a baseline CNN model, achieving an accuracy of 89.7%, precision of 90.1%, recall of 89.5%, and an F1 score of 89.8%. The Grad-CAM visualizations provided confirmation to the rationale made by the model to focus on certain regions of the tumor. This research demonstrates the application of RanMerFormer in clinical practice and suggests an effective approach to diagnose brain tumors.
Journal of Computer Science and Technology Studies, 2024
Parkinson’s Disease (PD) is a serious chronic illness known to slow the motor function of a human... more Parkinson’s Disease (PD) is a serious chronic illness known to slow the motor function of a human being as it affects movement and speech. There are significant benefits of early diagnosis of the disorder and it is essential that PD is diagnosed as early as possible. This paper assesses the applicability of stateof-art vocal features which are Vocal Tract Length Normalization (VTLN), Empirical Mode Decomposition (EMD), and Continuous Wavelet Transform (CWT) in combination with the recent Machine Learning (ML) algorithm for the identification of PD. Hence, we performed the performance assessment of different types of models such as Explainable Boosting Machine (EBM), Fast and Lightweight AutoML (FLAML), as well as NGBoost using 195 recorded vocal data sets. EBM was found to be the model with the highest accuracy of 86. 67%, and the AUC was 87. 33% for the same model and FLAML demonstrated a sensitivity score of 100%. The results of this work shed light on how sufficient analysis of the vocal material may be effectively combined with the contemporary ML algorithms to enhance the accuracy of PD identification.
Journal of Computer Science and Technology Studies, 2024
This paper aims to provide a systematic review of the state of the art in the use of deep neural ... more This paper aims to provide a systematic review of the state of the art in the use of deep neural networks (DNNs) in medical imaging, an area that has been recently developed because of the emergence of artificial intelligence (AI) and machine learning (ML). Deep Neural Networks including Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs) have shown excellence in handling of a gigantic imaging data and assisting in diagnostics, treatment planning and patient care. This review also focuses on the breakthroughs of DNNs in different imaging tasks including classification, segmentation, registration, and detection, and has highlighted its potential to enhance diagnostic accuracy in different organs like the brain, lung and chest. Some of the key problems related to DNN deployment are also considered, including the problems that arise due to limitations of data, computational power, and model interpretability. That is why innovations such as transfer learning and synthetic data acquisition contributed to reducing these problems, thereby improving model performance with limited data. The paper concludes by discussing future works where the emphasis is made on the higher interpretability of the models and the combination of clinical records with images. In this paper, we attempt to offer a comprehensive review of the latest developments in DNNs in medical image analysis and delineate potential research directions to help researchers and practitioners interested in applying DNNs for medical imaging tasks.
Elsevier, 2024
Lung cancer is one of the most frequent causes of death globally, mainly attributed to inadequate... more Lung cancer is one of the most frequent causes of death globally, mainly attributed to inadequate diagnostic methods and treatment opportunities. Detecting diseases at an early stage greatly improves the chances of survival, and even higher accuracy can be achieved with the help of machine learning tools. The work described in this study proposes the development of an optimized model for predicting lung cancer using deep learning and assembly on the public dataset. The models used include ResNet101 and VER-Net, as well as an ensemble model, CatBoost, and Random Forest with data oversampling done through ADASYN. To compare the effectiveness of the models, evaluation measures included Accuracy, Precision, Recall, F1-Score, and AUC. Among all the models, VER-Net reveals the highest results with an Accuracy of 97%, AUC of 98%, and Precision of 100%. These results suggest how our models could aid in early lung cancer detection. This approach will be extended to other health conditions in future work, and it will attempt to augment the models' performance by increasing the size of the data set and exploring federated learning for data privacy purposes.
International Journal of Recent Scientific Research, 2018
Data mining is the process of discovering the knowledge by analysing and extracting the data from... more Data mining is the process of discovering the knowledge by analysing and extracting the data from various highly repositories and the result bases on the useful and functional information's for the user. By the passage of time data mining is growing very vastly and became the famous technology by analysing and extraction of knowledge. We are standing at the point where life can have a better understanding of the problems. There was a time to start an active research on data mining but the limitation of this technology is under predictions as; is this technology has any limits for the future or it is limitless towards the growing world? Why only data mining technology is involves in the refining process of data? How efficiently the future relies on this technology? Is this new technology is so capable of being popular and more powerful in all respective fields? What are its limitations and how it is dominating the future? In this paper we have revealed the facts of growing fields with this manifesto and how it is affecting anonymously and how reliable the future is on this technology? And why this technology is so important for future? et.al, 1996 5).(Rygielski et.al, 2002) narrates as "Data mining is a set of methods used in the knowledge discovery process to distinguish previously unknown relationships and patterns within the data" 6 .The most dynamic definition of data mining according to the (Prabakaran et.al, 2018 7), "data mining is the process of Decision Tree, Neural Networks, Rule Inductions, Neatest Neighbours and Genetic Algorithms".
IJCSNS International Journal of Computer Science and Network Security, 2019
Since the last decade, data mining is contributing nicely and becoming very rich in different ind... more Since the last decade, data mining is contributing nicely and becoming very rich in different industries, especially data mining is working tremendously in healthcare section to predict the different life taking diseases on time. The introduced applications in data mining can be classified in to two subsets: the set of policy development and the set of decision-making or the decision support. By the huge amount of work in data mining but still it is hard to discover the priceless literature in the area of this technology under healthcare. During the research, number of papers has been reconsidered in this manifesto and comprises them in to the single platform for better reviews and understanding in term of saving time and efforts in terms of schemes, algorithms, techniques and results. This paper comprises the contribution of various researchers of healthcare sectors in a single platform under data mining categorization for predictions and decision-making by elaborating and mentioning different techniques and algorithms working out since decade to cure the life taking diseases like Breast cancer, Heart attacks, HIV, Lungs cancer, Skin infections with their mean accuracy by defining various studies.
Journal of Biotech Research, 2019
The advanced approach to modeling the logistic regression with fractional polynomials is applied ... more The advanced approach to modeling the logistic regression with fractional polynomials is applied in place of the traditional linear predictors to group the continuous covariates for the healthcare dataset. The real-life data obtained from the 500 of diabetic patients in northwestern Nigeria. The statistical modeling and predictions of finding the group covariates analytically based on the patients’ variables, age and the occupation, by the theories of “Royston and Altman” and “Royston and Sauerbrei”. The algorithm in terms of the selection for key factors with the properties congregates at ϕ (3, 3) with the deviance ratio of 113.00 and the log likelihood assessment of -56.50 for the model results of patients’ age. For the patients’ occupation, the algorithm for the key factors with
extensive outcomes converged at ϕ (-2, 3) with the deviance ratio of 111.36 and the log likelihood assessment of -56.43. The analysis modeling approach for the second standard method with the fractional polynomials provides the excellent results on the healthcare dataset to investigate the diabetic status. The method is also sufficient for
the metadata of different disease because it produces the minimum deviance and maximum log likelihood values.
Journal of Economics, Finance and Accounting Studies, 2024
In this paper we examine the role of machine learning (ML) in predicting stock market trends with... more In this paper we examine the role of machine learning (ML) in predicting stock market trends within BRICS economies. Complex, interdependent global and regional economic factors are today and will in the future increasingly influence stock markets, which necessitates innovative techniques for trend analysis. Using this state of the art ML models Support Vector Machines (SVMs), Random Forests, and neural networks the study predicts market fluctuations based on historical stock data, economic indicators and geopolitical events. This research emphasizes the increasing role of deep learning, especially with models such as Transformers and LSTMs, to meet the demand for high accuracy predictive systems in the volatile market. Its analysis brings model performance into comparison of BRICS nations, taking into account the peculiar financial and economic behavior peculiar to each of them. These results illuminate how ML can provide actionable intelligence for investors and policymakers to better manage risk and better make strategic investments. Findings from the study underscore the requirement for adopting sophisticated data driven tools in order to negotiate the intricacies of globalized financial systems. This study also explains the basis in helping us understand how machine learning changes the perspective on stock market analysis. It is a great source to understand how different ML techniques such as Support Vector Machines which are good at classification problems and Random Forests, which are known to handle over fitting, can be used on a financial dataset. It shows cutting edge tools for market prediction such as deep learning models like LSTMs, which are able to handle sequential time series data, or Transformers that further improve the traditional architectures with attention mechanisms. The paper also discusses data preprocessing methods, such as feature engineering and normalization, and the importance of their inclusion in improving model performance. This research shows the value of ML literacy and provides future financial analysts and decision makers with tools for addressing market volatility in a data driven and strategic context.
Journal of Economics, Finance and Accounting Studies, 2025
This paper analyzes how the Artificial Intelligence (AI) and Machine Learning (ML) are bridging t... more This paper analyzes how the Artificial Intelligence (AI) and Machine Learning (ML) are bridging the gap between economic growth in the BRICS countries. BRICS countries are emerging economies that are challenged by increasing income inequality, industrial transformation and the need for infrastructure development. Driven by AI, this study applies data analytics to macroeconomic datasets, tracking down patterns and functional takeaways regarding policy formulation and strategic decision making. The research employs techniques, including predictive modeling, clustering, and natural language processing (NLP), in areas such as trade optimization, resource allocation and labour market analysis. Case examples document successful introduction of AI systems to solve critical economic problems, from increasing healthcare access to raising productivity in agriculture. The findings illustrate the role of AI and ML in helping BRICS policymakers to an informed, data driven development. The research puts AI as core to the process of economic advancement, a solution to developmental gaps and a driver for growth. This research contributes both to its practical outcomes and by providing insights into how AI and ML can solve the complex economic problems of emerging markets. The paper introduces predictive modeling, which anticipates economic trends based on past data and clustering which groups similar economic behaviors to find patterns as tools that are important in economic analysis. Further, Natural Language Processing (NLP) is covered as a highly effective approach to understand policy documents, news, and unstructured data to improve the ability to make decisions. By helping students, researchers, and policymakers understand these AI powered techniques that optimize trade, resource management and labor, these scalable solutions to sustainable development are available. This study touts data driven innovation as a critical means to solve global challenges, well-equipped readers with the skills and knowledge to leverage AI for economic progress in a geography of the dynamic and connected.
Journal of Economics, Finance and Accounting Studies, 2024
In this paper, we explore the use of different machine learning models on predicting currency exc... more In this paper, we explore the use of different machine learning models on predicting currency exchange rates among BRICS economies (Brazil, Russia, India, China and South Africa). With global economic uncertainties rising, forecasting trends of currency becomes more accurate and real time important for policymakers, businesses, and investors. This study utilizes the recent progress in ML algorithms, i.e. Long Short Term Memory (LSTM) networks and the ensemble method of XGBoost, to analyze the history exchange rate data along with macroeconomic projections. These models are then evaluated for their performance against these non-linearities and dynamism in the data and provide a significantly better performance over traditional econometric techniques. The research integrates large scale datasets with real world economic parameters and demonstrates how AI driven forecasting might reduce risks in foreign exchange markets. The results show better accuracy and reliability as compared to other tools, which make BRICS countries’ currency stability better managed by such a tool. The results have both academic and practical implications, highlighting the ways in which intelligent systems can transform economic decision making in emerging markets. Additionally, this work provides educational insight into the nature of machine learning as a transformational tool for financial forecasting. Research on ways to incorporate techniques such as using LSTM networks that do particularly well in capturing temporal dependence in sequential data and XGBoost, a technique that customers' data has proven to outperform on a wide variety of data structure types. We find that exploring how these models find patterns in massive datasets and how they outperform traditional models like ARIMA can be beneficial to educators and students alike. This work also calls attention to the utility of feature selection and hyper parameter tuning to increase the prediction accuracy. This paper bridges the gap between theory and implementation by providing a foundational start point for those who wish to apply ML to real world financial problems.
Journal of Environmental Science and Economics, 2024
This study investigates the impact of Artificial Intelligence (AI) innovation on the ecological footprint in the Nordic region from 1990 to 2020, alongside the effects of banking development, stock market capitalization, economic growth, and urbanization. Utilizing the STIRPAT model, the study incorporates cross-sectional dependence and slope homogeneity tests, revealing issues of heterogeneity and cross-sectional dependence. The analysis employs both first- and second-generation panel unit root tests, confirming that the variables are free from unit root problems. Panel cointegration tests demonstrate that the variables are cointegrated in the long run. To explore the short- and long-term relationships, the study utilizes the Panel Autoregressive Distributed Lag (ARDL) model. The Panel ARDL results indicate that economic growth, stock market capitalization, and urbanization positively correlate with the ecological footprint in both the short and long run. Conversely, AI innovation and banking development negatively correlate with the ecological footprint. To validate the Panel ARDL estimations, robustness checks are performed using Fully Modified OLS, Dynamic OLS, and Fixed Effects with OLS, all of which support the initial findings. Furthermore, the study employs the D-H causality test to identify causal relationships. The results show a unidirectional causal relationship between AI innovation, stock market capitalization, urbanization, and the ecological footprint. In contrast, a bidirectional causal relationship exists between economic growth and the ecological footprint, as well as between banking development and the ecological footprint.
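The STIRPAT framework underlying this analysis is conventionally estimated in log-linear form. The specification below is an illustrative sketch using the abstract's variable set; the symbols and functional form are placeholders rather than the authors' exact equation.

```latex
% Illustrative log-linear STIRPAT specification for the ecological footprint (EF):
% AI = AI innovation, BANK = banking development, SMC = stock market capitalization,
% GDP = economic growth, URB = urbanization; i indexes countries, t indexes years.
\begin{equation}
  \ln EF_{it} = \alpha_i
    + \beta_1 \ln AI_{it}
    + \beta_2 \ln BANK_{it}
    + \beta_3 \ln SMC_{it}
    + \beta_4 \ln GDP_{it}
    + \beta_5 \ln URB_{it}
    + \varepsilon_{it}
\end{equation}
```

In the Panel ARDL step, this level relationship is reparameterized into short-run differences plus an error-correction term that measures the speed of adjustment back to the long-run path.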
Journal of Environmental Science and Economics, 2024
The need for sustainable solutions has increased globally as a result of the growing environmental problems brought about by urbanization and industrialization. Against this backdrop, private investment in artificial intelligence (AI) has become a viable means of promoting environmental sustainability, mainly because of AI's capacity to minimize ecological footprints and maximize resource utilization. This research investigates the role of private investment in AI in promoting environmental sustainability in the United States from 1990 to 2019. It also analyzes the impact of financial globalization, technological innovation, and urbanization by testing the Load Capacity Curve (LCC) hypothesis. The research utilizes stationarity tests, which indicate that the variables are free from unit root problems and exhibit mixed orders of integration. Using the Autoregressive Distributed Lag (ARDL) model bound test, the analysis finds that the variables are cointegrated in the long run. The short-run and long-run estimations of the ARDL model confirm the existence of the LCC hypothesis in the United States, revealing a U-shaped association between income and the load capacity factor. The findings show that private investment in AI has a significant positive correlation with the load capacity factor, thus promoting environmental sustainability. Conversely, technological innovation and financial globalization exhibit a negative correlation with the load capacity factor in both the short and long run. To validate the ARDL estimation approach, the study employs Fully Modified OLS, Dynamic OLS, and Canonical Cointegrating Regression estimation methods, all of which support the ARDL outcomes. Additionally, the Granger causality test reveals a unidirectional causal connection from private investment in AI, financial globalization, economic growth, technological innovation, and urbanization to the load capacity factor.
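For readers unfamiliar with ARDL estimation, the sketch below shows the general workflow in Python using the ardl module of statsmodels (version 0.13 or later); the series are synthetic placeholders and the lag orders are selected by AIC rather than taken from the paper.

```python
# Hypothetical sketch: selecting and fitting an ARDL model on synthetic annual data.
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ardl_select_order

rng = np.random.default_rng(2)
n = 30                                                # e.g. 1990-2019, annual observations
exog = pd.DataFrame({
    "ai_investment": np.cumsum(rng.normal(0.05, 0.10, n)),
    "fin_global":    np.cumsum(rng.normal(0.02, 0.10, n)),
    "urbanization":  np.linspace(0.75, 0.83, n),
})
lcf = 0.4 * exog["ai_investment"] - 0.2 * exog["fin_global"] + rng.normal(0, 0.05, n)

sel = ardl_select_order(lcf, maxlag=2, exog=exog, maxorder=2, trend="c", ic="aic")
res = sel.model.fit()
print(res.summary())                                  # short-run dynamics and level terms
```

The bounds test for cointegration is then run on the error-correction form of the selected model, comparing the F-statistic against the Pesaran critical bounds.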
Journal of Environmental Science and Economics, 2024
In response to increasing environmental challenges, the United States has deliberately adopted technological advancements to promote sustainable development. This includes efforts to decrease pollution, improve energy efficiency, and encourage the use of environmentally friendly technology across industries. This study investigates the role of Artificial Intelligence (AI) technology in promoting environmental sustainability in the United States from 1990 to 2019. It also examines the impacts of financial development, ICT use, and economic growth on the Load Capacity Factor (LCF). Various unit root tests revealed no unit root issues and mixed integration orders among the variables. The Autoregressive Distributed Lag (ARDL) model explored cointegration, indicating long-run relationships among the variables. The ARDL findings confirm the Load Capacity Curve hypothesis for the United States, with AI technology and ICT use positively correlating with LCF in both the short and long run. Conversely, financial development and population growth significantly reduce LCF. Robustness checks using FMOLS, DOLS, and CCR estimation approaches align with the ARDL results. Granger causality tests reveal unidirectional causality from economic growth, AI, financial development, and ICT use to LCF, and bidirectional causality between population and LCF. Diagnostic tests confirm the results are free from heteroscedasticity, serial correlation, and specification errors. This study underscores the importance of AI and ICT in enhancing environmental sustainability while highlighting the adverse impacts of financial development and population growth on LCF.
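The pairwise Granger causality tests reported above follow a standard pattern; the sketch below shows the mechanics with statsmodels on two synthetic series standing in for ICT use and the load capacity factor (the series, names, and lag length are assumptions).

```python
# Hypothetical sketch: does ICT use Granger-cause the load capacity factor?
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(3)
n = 60
ict = np.cumsum(rng.normal(0.1, 0.3, n))              # synthetic ICT-use series
lcf = np.empty(n)
lcf[0] = 1.0
for t in range(1, n):
    lcf[t] = 0.6 * lcf[t - 1] + 0.2 * ict[t - 1] + rng.normal(0, 0.1)

# Convention: the test asks whether the SECOND column Granger-causes the FIRST.
data = np.column_stack([lcf, ict])
grangercausalitytests(data, maxlag=2)                 # prints F- and chi-square tests per lag
```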
Global Sustainability Research, 2024
This study investigates the role of Artificial Intelligence (AI) in promoting a sustainable environment within the G-7 countries by testing the Load Capacity Curve (LCC) hypothesis. Additionally, it examines the effects of financial development, the digital economy, and urbanization on the load capacity factor using data from 2010 to 2022. The research employs cross-sectional dependence and slope homogeneity tests, revealing issues of cross-sectional dependence and heterogeneity. Panel unit root tests, both first and second generation, confirm that the variables are free from unit root problems. Furthermore, panel cointegration tests indicate that the variables are cointegrated in the long run. To assess the impact of the explanatory variables on the load capacity factor, the study utilizes the Method of Moments Quantile Regression (MMQR). The findings reveal a U-shaped relationship between income and the load capacity factor, supporting the LCC hypothesis in the G-7 region. The results also indicate that AI innovation and financial development have a significant positive correlation with the load capacity factor. In contrast, the digital economy and urbanization are found to significantly reduce the load capacity factor. Robustness checks, including the Driscoll-Kraay standard error, Augmented Mean Group, and Common Correlated Effects Mean Group estimation approaches, validate the findings obtained from the MMQR method. Moreover, the Dumitrescu-Hurlin (D-H) causality assessment is used to explore the causal connections between variables. The results reveal a unidirectional causal relationship between income and the load capacity factor, and bidirectional causal relationships between the remaining explanatory variables and the load capacity factor.
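MMQR itself is not available in mainstream Python libraries (it is typically run in Stata), so as a simplified stand-in the sketch below estimates ordinary conditional quantile regressions at several quantiles with statsmodels. The variables and data are hypothetical, and the example only conveys the idea of slopes varying across quantiles, not the full MMQR location-scale estimator.

```python
# Hypothetical stand-in for MMQR: plain quantile regressions across quantiles.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 91                                                # e.g. 7 countries x 13 years (2010-2022)
df = pd.DataFrame({
    "ai_innovation": rng.normal(0, 1, n),
    "fin_dev":       rng.normal(0, 1, n),
    "digital_econ":  rng.normal(0, 1, n),
    "urbanization":  rng.normal(0, 1, n),
})
df["lcf"] = (0.30 * df["ai_innovation"] + 0.20 * df["fin_dev"]
             - 0.25 * df["digital_econ"] - 0.15 * df["urbanization"]
             + rng.normal(0, 0.5, n))

model = smf.quantreg("lcf ~ ai_innovation + fin_dev + digital_econ + urbanization", df)
for q in (0.25, 0.50, 0.75):
    print(q, model.fit(q=q).params.round(3).to_dict())    # slopes at each quantile
```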
Journal of Environmental Science and Economics, 2024
This investigation analyses the influence of private AI investment and financial development (FD) on CO2 emissions in the United States, using the STIRPAT framework to account for the roles of GDP, population, and foreign direct investment (FDI). The data's robustness was verified through a variety of unit root tests, which confirmed that the variables are free of unit root issues and exhibit a mixed order of integration. The ARDL bound test was used to investigate cointegration among the variables and found a long-run equilibrium relationship. The ARDL model results show that income, FDI, FD, and population significantly increase CO2 emissions in both the short and long term. In contrast, we found that private investment in AI led to a significant reduction in CO2 emissions over these time frames. Additional estimations were conducted using FMOLS, DOLS, and CCR methods to verify the ARDL results, all of which attested to the robustness of the initial findings. In addition, the study implemented a pairwise Granger causality test to illustrate the directional relationships between the variables. The findings indicate unidirectional causal links running from GDP, private AI investment, FDI, and population to CO2 emissions. Most notably, we observed bidirectional causality between CO2 emissions and FD. Diagnostic tests further corroborated the validity of the study's conclusions, confirming that the model is free from specification errors, serial correlation, and heteroscedasticity.
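The unit root screening mentioned above typically starts with an Augmented Dickey-Fuller test on each series in levels and in first differences. The sketch below uses a synthetic random walk as a stand-in for log CO2 emissions; the series and test options are assumptions.

```python
# Hypothetical sketch: ADF test on a synthetic series in levels and first differences.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(5)
co2 = np.cumsum(rng.normal(0.02, 0.10, 30)) + 9.0     # non-stationary in levels by construction

for name, series in [("levels", co2), ("first differences", np.diff(co2))]:
    stat, pvalue, *_ = adfuller(series, autolag="AIC")
    print(f"ADF on {name}: statistic={stat:.2f}, p-value={pvalue:.3f}")
```

A series that is non-stationary in levels but stationary after differencing is integrated of order one, which is what makes the ARDL bounds approach with mixed I(0)/I(1) regressors attractive.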
Global Sustainability Research, 2024
This study investigates the impact of AI innovation on environmental sustainability in the G-7 region from 2010 to 2022. Additionally, it tests the Load Capacity Curve (LCC) hypothesis in relation to financial accessibility, globalization, and urbanization. Cross-sectional dependence and slope homogeneity tests reveal the presence of cross-sectional dependence and heterogeneity issues. Panel unit root and panel cointegration tests confirm that the variables are free from unit root problems and are cointegrated in the long run. To identify significant factors influencing environmental sustainability, this study employs Panel ARDL and Quantile Regression methods. Both methods confirm the LCC hypothesis in the G-7 region, demonstrating a U-shaped relationship between income and the load capacity factor. The results indicate that AI innovation and financial accessibility are significantly positively correlated with the load capacity factor, while globalization and urbanization are negatively correlated, leading to lower environmental sustainability. To validate the robustness of the Panel ARDL and Quantile Regression results, Driscoll-Kraay standard errors, Augmented Mean Group, and Common Correlated Effects Mean Group estimation approaches are applied, all of which support the initial findings. Furthermore, the D-H causality test reveals unidirectional causality from economic growth, financial accessibility, globalization, and urbanization to the load capacity factor, and bidirectional causality between AI innovation and the load capacity factor.
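As a rough approximation of the Driscoll-Kraay robustness check mentioned above, the sketch below fits a fixed-effects panel regression with the linearmodels package, whose PanelOLS estimator offers a kernel-based (Driscoll-Kraay-style) covariance option. The countries, years, variables, and data are hypothetical placeholders, and this is not the authors' estimation code.

```python
# Hypothetical sketch: fixed-effects panel regression with Driscoll-Kraay-style errors.
import numpy as np
import pandas as pd
from linearmodels.panel import PanelOLS

rng = np.random.default_rng(6)
countries = ["CAN", "FRA", "DEU", "ITA", "JPN", "GBR", "USA"]
years = range(2010, 2023)
idx = pd.MultiIndex.from_product([countries, years], names=["country", "year"])

df = pd.DataFrame({
    "ai_innovation": rng.normal(0, 1, len(idx)),
    "fin_access":    rng.normal(0, 1, len(idx)),
    "globalization": rng.normal(0, 1, len(idx)),
    "urbanization":  rng.normal(0, 1, len(idx)),
}, index=idx)
df["lcf"] = (0.3 * df["ai_innovation"] + 0.2 * df["fin_access"]
             - 0.2 * df["globalization"] - 0.1 * df["urbanization"]
             + rng.normal(0, 0.3, len(idx)))

mod = PanelOLS(df["lcf"], df[["ai_innovation", "fin_access", "globalization", "urbanization"]],
               entity_effects=True)
print(mod.fit(cov_type="kernel"))                     # HAC covariance robust to cross-sectional correlation
```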