Fog Computing Research Papers - Academia.edu

2025, International Journal of Electrical and Electronics Engineering (SSRG)

This paper presents a Trust Management System (TMS) designed to counteract cyber-attacks in fog computing environments. The system integrates fuzzy AHP, hierarchical PROMETHEE methods, and fuzzy ranking to evaluate trust based on Quality of Service (QoS), Quality of Security (QoSec), and economic factors. Tested against Replay, On-Off, Bad-mouthing, and Ransomware attacks, the system demonstrates high detection accuracy, with error rates between 3.50% and 4.15%. The results show that the proposed TMS effectively enhances security and trust evaluation in fog computing networks.
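
As a rough illustration of the fuzzy weighting-and-ranking idea in this abstract, the sketch below defuzzifies triangular fuzzy weights for the QoS, QoSec, and economic criteria and ranks fog nodes by a weighted trust score. The weights, node scores, and node names are invented for the example; this is not the paper's actual TMS.

```python
# Sketch of fuzzy-weighted trust ranking (illustrative values, not the paper's).

def centroid(tfn):
    """Defuzzify a triangular fuzzy number (l, m, u) by its centroid."""
    l, m, u = tfn
    return (l + m + u) / 3.0

# Assumed triangular fuzzy weights for the three criterion groups.
fuzzy_weights = {"QoS": (0.4, 0.5, 0.6), "QoSec": (0.2, 0.3, 0.4), "Economic": (0.1, 0.2, 0.3)}
crisp = {c: centroid(w) for c, w in fuzzy_weights.items()}
total = sum(crisp.values())
weights = {c: v / total for c, v in crisp.items()}          # normalized crisp weights

# Hypothetical per-criterion scores (0..1) observed for each fog node.
nodes = {
    "fog-1": {"QoS": 0.9, "QoSec": 0.8, "Economic": 0.6},
    "fog-2": {"QoS": 0.7, "QoSec": 0.9, "Economic": 0.8},
    "fog-3": {"QoS": 0.5, "QoSec": 0.4, "Economic": 0.9},   # e.g. a suspicious node
}

trust = {n: sum(weights[c] * s[c] for c in weights) for n, s in nodes.items()}
for node, t in sorted(trust.items(), key=lambda kv: -kv[1]):
    print(f"{node}: trust = {t:.3f}")
```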

2025, International Journal on Recent and Innovation Trends in Computing and Communication

Hadoop is a Java-based programming framework that supports storing and processing big data in a distributed computing environment. It uses HDFS for data storage and MapReduce for processing that data. MapReduce has become an important distributed processing model for large-scale data-intensive applications such as data mining and web indexing, and it is widely used for short jobs requiring low response time. The current Hadoop implementation assumes that the computing nodes in a cluster are homogeneous. Unfortunately, neither the homogeneity nor the data locality assumption holds in virtualized data centers, and Hadoop's scheduler can cause severe performance degradation in heterogeneous environments. We examine the Longest Approximate Time to End (LATE) scheduler, which is highly robust to heterogeneity and can improve Hadoop response times by a factor of two in such clusters.
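
A minimal sketch of the LATE heuristic summarized above: estimate each running task's remaining time from its progress rate and speculate on the straggler expected to finish last. The task data and the slow-task threshold are invented for the example.

```python
# Sketch of the LATE (Longest Approximate Time to End) speculation heuristic.
# progress is a fraction in [0, 1]; elapsed is seconds since the task started.
tasks = [
    {"id": "map-1", "progress": 0.90, "elapsed": 40.0},
    {"id": "map-2", "progress": 0.35, "elapsed": 60.0},   # straggler on a slow node
    {"id": "map-3", "progress": 0.80, "elapsed": 45.0},
]

def rate(t):          # progress per second so far
    return t["progress"] / t["elapsed"]

def time_to_end(t):   # estimated seconds remaining at the current rate
    return (1.0 - t["progress"]) / rate(t)

# Only tasks in the slowest quartile of progress rates are speculation candidates.
slow_cutoff = sorted(rate(t) for t in tasks)[len(tasks) // 4]
candidates = [t for t in tasks if rate(t) <= slow_cutoff]

# Speculate on the candidate expected to finish last.
victim = max(candidates, key=time_to_end)
print(f"launch speculative copy of {victim['id']}, "
      f"~{time_to_end(victim):.0f}s left at current rate")
```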

2025, Journal of Electrical System

The Internet of Things (IoT) has become indispensable for reducing human intervention by interconnecting smart devices capable of transmitting and receiving data over the internet. However, the proliferation of IoT devices has heightened concerns regarding security and privacy, particularly in identifying and eliminating compromised or malicious nodes. In response, a lightweight trust management system, Fog Trust, is proposed. Fog Trust features a multi-level architecture comprising edge nodes, a trusted intermediary known as the trust agent, and the fog layer. The agent facilitates communication between IoT nodes and the fog layer for computational purposes, alleviating the computational burden on the nodes and ensuring a reliable environment. The trust agent calculates the trust degree and transmits it to the fog layer, which employs encryption techniques to uphold integrity. The encrypted value is combined with the previous trust degree, enhancing the accuracy of the updated trust degree. Evaluation of the trust management approach against potential attacks such as good-mouthing and bad-mouthing shows its efficacy in assigning low trust degrees to malicious nodes across different scenarios, even when the network contains varying proportions of malicious nodes.

2025, International Journal of Intelligent Systems and Applications In Engineering

In this research, we investigate how to include a Trust Management System (TMS) into fog computing-a decentralized computing architecture that expands cloud computing's capabilities to the edge of a network. We conduct an inquiry into the application and evaluation of a new multi-criteria trust mechanism designed specifically for fog computing settings. This approach, which combines "soft trust" with "hard trust," is essential to assessing and controlling the dependability and credibility of entities in the fog computing environment. We discover that the implementation of trust models in this setting improves the reliability and usefulness of Electroencephalography (EEG) applications in various domains, including neurology and clinical medicine. Additionally, these models aid in the creation of implementations that are safe, intuitive, and compliant with ethical standards.

2025, International Journal on Recent and Innovation Trends in Computing and Communication

As fog computing emerges as a natural extension of cloud computing, its decentralized nature brings numerous advantages, such as reduced latency and enhanced Quality of Service (QoS). However, this paradigm also introduces significant security and privacy challenges, particularly when fog nodes collaborate and exchange data. In this paper, we propose a robust trust management system that evaluates both Quality of Service (QoS) and Quality of Protection (QoP) metrics from direct and indirect interactions among fog nodes. Our approach helps mitigate security risks posed by potentially malicious nodes by incorporating a predictive trust evaluation system. The proposed system reduces malicious interactions by approximately 66% and enhances response times by reducing latency by around 15 seconds. The findings demonstrate that an effective trust management system is crucial for building secure and reliable fog computing environments.
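
The sketch below illustrates, under assumed weights and observations, how direct experience and peer recommendations can be blended into a single trust value per fog node using QoS and QoP evidence. It is a toy stand-in, not the paper's predictive trust evaluation system.

```python
# Sketch of combining direct and indirect (recommended) evidence into one
# trust value per fog node; weights and observations are assumptions.
ALPHA = 0.7   # weight of first-hand (direct) experience vs. recommendations

def score(qos, qop, w_qos=0.5):
    """Blend Quality of Service and Quality of Protection observations (0..1)."""
    return w_qos * qos + (1 - w_qos) * qop

# Direct observations made by this node, and recommendations from peers.
direct = {"fog-A": score(0.92, 0.85), "fog-B": score(0.40, 0.55)}
indirect = {
    "fog-A": [score(0.88, 0.90), score(0.95, 0.80)],
    "fog-B": [score(0.35, 0.50), score(0.42, 0.61)],
}

trust = {
    node: ALPHA * direct[node] + (1 - ALPHA) * (sum(recs) / len(recs))
    for node, recs in indirect.items()
}
print(trust)   # nodes below some threshold would be flagged as potentially malicious
```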

2025, Tuijin Jishu / Journal of Propulsion Technology

In this research, we investigate how to include a Trust Management System (TMS) into fog computing-a decentralized computing architecture that expands cloud computing's capabilities to the edge of a network. We conduct an inquiry into the application and evaluation of a new multi-criteria trust mechanism designed specifically for fog computing settings. This approach, which combines "soft trust" with "hard trust," is essential to assessing and controlling the dependability and credibility of entities in the fog computing environment. We discover that the implementation of trust models in this setting improves the reliability and usefulness of Electroencephalography (EEG) applications in various domains, including neurology and clinical medicine. Additionally, these models aid in the creation of implementations that are safe, intuitive, and compliant with ethical standards.

2025, Harbin Engineering University

Fog computing is an innovative and intriguing paradigm with the potential to solve the issues of conventional cloud computing. Because fog servers are used to process, manage, and store private and sensitive data, security and privacy concerns need to be resolved before this technology can be widely adopted. Putting a trust management system in place is one such measure. A node's (or trustee's) reliability is evaluated using a set of criteria decided upon by the trustor. To enable a trustor to assess how much each parameter contributes to the overall trust value of a trustee, and whether it is beneficial to work with that node, it is imperative to identify and prioritise these criteria. The prioritisation of trust parameters is a multi-criteria decision-making (MCDM) problem because it calls for the simultaneous consideration of several different criteria. In this study, trust parameters in fog computing are identified and prioritised using a fuzzy analytic hierarchy process (Fuzzy-AHP) technique. The findings suggest that quality of service (QoS), quality of security (QoSec), and recommendations are the top-ranked parameters that a service requester can use to assess the trustworthiness of a service provider. A service provider can use social relationships, ranked as the most important parameter, to assess the truthfulness of a service requester, whereas past reputation is the least important factor.
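
To make the AHP-style prioritisation concrete, the sketch below derives priority weights from a crisp pairwise comparison matrix via the row geometric mean. The comparison values and criterion list are invented, and the crisp version only approximates the fuzzy AHP used in the paper.

```python
# Crisp AHP illustration (the paper uses fuzzy AHP; values here are invented).
import numpy as np

criteria = ["QoS", "QoSec", "Recommendations", "Social", "Past reputation"]

# Pairwise comparison matrix A[i][j] = how much criterion i is preferred over j.
A = np.array([
    [1,   2,   3,   5,   7],
    [1/2, 1,   2,   4,   6],
    [1/3, 1/2, 1,   3,   5],
    [1/5, 1/4, 1/3, 1,   2],
    [1/7, 1/6, 1/5, 1/2, 1],
], dtype=float)

# Priority vector via the row geometric mean, then normalization.
gm = A.prod(axis=1) ** (1.0 / A.shape[0])
weights = gm / gm.sum()

for name, w in sorted(zip(criteria, weights), key=lambda x: -x[1]):
    print(f"{name:16s} {w:.3f}")
```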

2025, Springer

Cloud computing is a technology that connects multiple nodes to each other in a network. It facilitates the sharing of computational tasks and resources between clients and servers and operates in heterogeneous environments, but its main issues are the privacy and security of data. The data generated by IoT devices is also very large, and if all of it is passed to the cloud, latency increases. Fog computing was introduced to mitigate these problems of cloud computing. Fog computing does not replace cloud computing; rather, it is a service that enhances cloud services by keeping the data that needs to live on a node at the network's outskirts close to the users. Fog computing brings the processing, storage, and networking services of cloud data centers closer to end users.

2025, Deep Science Publishing

Cloud Computing is a technological revolution impacting a vast cross-section of the world's economy, especially forms of Information Technology outsourcing. Specifically, Infrastructure-as-a-Service (IaaS) in financial IT systems provides virtual computing infrastructure dedicated to businesses dealing with financial transactions, such as government banks, private banks, non-banking financial companies, and online transaction platforms. IaaS empowers financial companies with scalable and flexible infrastructure choices, lowering costs for organizations with unpredictable usage patterns while greatly enhancing reliability by providing services in a centralized manner that focuses on uptime as well as the security of computing resources. Consequently, this paper seeks to outline the optimization process for sizing IaaS infrastructure in financial IT systems across financial projects and business scenarios.

2025, Qubahan Academic Journal

Semantic web and cloud technology systems have been critical components in creating and deploying applications in various fields. Although they are self-contained, they can be combined in various ways to create solutions, which has recently been discussed in depth. Recent years have seen a dramatic increase in new cloud providers, applications, facilities, management systems, data, and so on, reaching a level of complexity that indicates the need for new technology to address such tremendous, shared, and heterogeneous services and resources. As a result, issues with portability, interoperability, security, selection, negotiation, discovery, and definition of cloud services and resources may arise. Semantic technologies, which have enormous potential for cloud computing, are a vital way of re-examining these issues. This paper explores and examines the role of Semantic-Web Technology in the Cloud from a variety of sources. In addition, a "cloud-driven" mode of interact...

2025

Real-Time Operating Systems (RTOS) are foundational for time-critical applications such as robotics, automotive systems, and embedded IoT devices. Traditional task scheduling algorithms like Rate Monotonic Scheduling (RMS) and Earliest Deadline First (EDF) offer predictable timing but are limited in adaptability and efficiency under varying workloads. With the advent of Artificial Intelligence (AI) and Machine Learning (ML), there is a growing interest in applying these technologies to optimize RTOS scheduling strategies. This paper presents a theoretical study on integrating ML techniques, such as supervised learning, reinforcement learning, and neural networks, into RTOS task scheduling to improve deadline adherence, CPU utilization, and system responsiveness. The paper also highlights key challenges, research gaps, and future directions in this emerging interdisciplinary domain.
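
Since the abstract contrasts classical schedulers with learned ones, here is a tiny Earliest Deadline First dispatch step on made-up tasks; it shows the fixed baseline rule that ML-based schedulers would replace with a learned policy.

```python
# Baseline Earliest Deadline First (EDF) pick on a made-up ready queue.
import heapq

now = 0.0
ready = [  # (absolute deadline, task name, remaining execution time)
    (12.0, "sensor-fusion", 3.0),
    (8.0,  "motor-control", 1.5),
    (20.0, "telemetry",     4.0),
]
heapq.heapify(ready)                    # min-heap keyed on deadline

deadline, name, remaining = heapq.heappop(ready)
print(f"t={now}: dispatch '{name}' (deadline {deadline}, {remaining}s of work left)")
# An ML-based scheduler would replace this fixed rule with a learned policy that
# also accounts for workload history, CPU load, and predicted execution times.
```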

2025, Journal of Emerging Technologies and Innovative Research

The fusion of Cloud Computing and Internet of Things (IoT) has ushered in revolutionary capabilities across different sectors, such as smart cities, healthcare, industrial automation, and agriculture. Yet the two technologies' integration also entails immense challenges, especially with regard to security and scalability. Since IoT devices produce tremendous volumes of data, the need for cloud infrastructure to process, manage, and store such data in real-time increases every day. This paper introduces an in-depth study on the improvement of security and scalability within cloud computing environments to enable seamless integration of IoT systems. The paper proceeds with an examination of the native weaknesses in cloud-based IoT structures, including unauthorized access, data leaks, identity impersonation, and Denial of Service (DoS) attacks. The paper also explores the shortcomings of available cloud infrastructures to support the continuously growing burden from millions of IoT devices in connection. The paper assesses recent industry trends and research methods, highlighting loopholes and dynamics in different security protocols and scalability measures. To meet these demands, the paper suggests a multi-layered security model that utilizes sophisticated encryption methods, secure authentication protocols, and anomaly detection software to protect information and preserve user privacy. Furthermore, it investigates the application of scalable cloud models like hybrid cloud and edge computing in mitigating latency and enhancing responsiveness for time-critical IoT applications.

2025, Bayero Journal of Pure and Applied Sciences

2025

The smart grid, with its large array of networked devices and bidirectional data flow between the end-users and the grid, presents new requirements in service reliability, communication latency, and data delivery. The traditional TCP/IP communication paradigm was not designed to handle these requirements at the envisioned scale. This calls for a novel networking paradigm. This paper makes the case for the use of the Information Centric Networking (ICN) paradigm to create the smart grid network architecture. We quantitatively assess the gains resulting from ICN's inherent functionalities, such as concurrent use of multiple interfaces, request aggregation, and stateful forwarding, which enable timely critical message delivery and fast packet re-transmissions. We perform simulations to compare IP and ICN-based smart grid deployments. Our results show that the ICN-based solution outperforms the IP-based solution, especially in a network with packet losses.

2025, IEEE Internet of Things Journal

In today's era of explosion of Internet of Things (IoT) and end-user devices and their data volume, emanating at the network's edge, the network should be more in-tune with meeting the needs of these demanding edge computing applications. To this end, we design and prototype Information-Centric edge (ICedge), a general-purpose networking framework that streamlines service invocation and improves reuse of redundant computation at the edge. ICedge runs on top of Named-Data Networking, a realization of the Information-Centric Networking vision, and handles the "low-level" network communication on behalf of applications. ICedge features a fully distributed design that: (i) enables users to get seamlessly on-boarded onto an edge network, (ii) delivers application invoked tasks to edge nodes for execution in a timely manner, and (iii) offers naming abstractions and network-based mechanisms to enable (partial or full) reuse of the results of already executed tasks among users, which we call "compute reuse", resulting in lower task completion times and efficient use of edge computing resources. Our simulation and testbed deployment results demonstrate that ICedge can achieve up to 50× lower task completion times leveraging its networkbased compute reuse mechanism compared to cases, where reuse is not available.
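
A toy illustration of the "compute reuse" idea described above: tasks are named by function and input, results are cached under that name, and a repeat request is served from the cache instead of being re-executed. The naming scheme, function, and payload are invented and do not reflect ICedge's actual wire format.

```python
# Toy "compute reuse": name a task by function + input, cache its result,
# and serve repeat requests from the cache instead of re-executing.
import hashlib, time

cache = {}

def task_name(func_name: str, payload: bytes) -> str:
    return f"/edge/exec/{func_name}/{hashlib.sha256(payload).hexdigest()[:16]}"

def invoke(func, func_name, payload):
    name = task_name(func_name, payload)
    if name in cache:                       # full reuse: skip execution entirely
        return cache[name], "reused"
    result = func(payload)
    cache[name] = result
    return result, "executed"

def detect_objects(frame: bytes):
    time.sleep(0.2)                         # stand-in for an expensive inference step
    return {"objects": ["car", "person"]}

frame = b"...camera frame bytes..."
print(invoke(detect_objects, "detect", frame))   # executed
print(invoke(detect_objects, "detect", frame))   # reused, returns immediately
```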

2025

This paper discusses leveraging the Named Data Networking (NDN) architecture and Named Function Networking (NFN) to facilitate in-network edge computing. In the NDN context, we consider the Augmented Reality (AR) use case, a challenging application, to discuss how NDN functionalities can be leveraged for addressing inherent edge computing challenges, such as efficient resource discovery, compute re-use, mobility management, and security. We present several options to tackle the highlighted challenges and, where possible, provide solutions.

2025, 2018 IEEE International Conference on Edge Computing (EDGE)

This paper discusses leveraging the Named Data Networking (NDN) architecture and Named Function Networking (NFN) to facilitate in-network edge computing. In the NDN context, we consider the Augmented Reality (AR) use case, a challenging application, to discuss how NDN functionalities can be leveraged for addressing inherent edge computing challenges, such as efficient resource discovery, compute re-use, mobility management, and security. We present several options to tackle the highlighted challenges and, where possible, provide solutions.

2025, IEEE Network

Delay-sensitive applications have been driving the move away from cloud computing, which cannot meet their low-latency requirements. Edge computing and programmable switches have been among the first steps toward pushing computation closer to end-users in order to reduce cost, latency, and overall resource utilization. This article presents the "compute-less" paradigm, which builds on top of the well-known edge computing paradigm through a set of communication and computation optimization mechanisms (e.g., in-network computing, task clustering and aggregation, computation reuse). The main objective of the compute-less paradigm is to reduce the migration of computation and the usage of network and computing resources, while maintaining high Quality of Experience for end-users. We discuss the new perspectives, challenges, limitations, and opportunities of this compute-less paradigm.

2025, Deep Science Publishing

This study explores the use of big data analytics for real-time patient monitoring and risk prediction. By processing vast, diverse health data streams, the system enables continuous tracking of patient conditions and early identification of potential risks, thereby supporting timely interventions and improving overall clinical outcomes and patient safety.

2025, Deep Science Publishing

Real-time communication and collaboration applications have transformed the way we interact and can be seen as foundational technologies for knowledge sharing and remote work. These applications are built on real-time media processing and networked systems and are therefore typically deployed on cloud infrastructures. Given the ubiquity of such deployments, they may deliver poor quality to latency-sensitive end hosts due to high request volumes, usage spikes, high network latency, and packet loss. As such, there has always been demand for more effective communication and collaboration systems across geographically dispersed members. The rise of edge computing has stimulated research interest in enabling low-latency flows. In the context of advanced networks, an integrated scope can greatly enhance the efficiency of real-time communication and collaboration, supported by different network segments such as the edge cloud and optimized network slicing. By combining network intelligence with flexible mobile edge computing, end users can fully enjoy ultra-high-quality real-time media services. Ensuring end-host service deployment involves varied and difficult challenges. With appropriate system management methods, lightweight ("slim") end-user devices can be activated and fully integrated, offering long-lasting, high-quality connectivity management. Moreover, the new end-user system can also improve both economy and performance.

2025, Deep Science Publishing

This chapter presents and discusses issues in the context of the digitalization of energy-consuming systems, which is advancing at both the utility-customer and facility-subsystem levels. The focus of this research includes real-time insights and predictions as they relate to energy management and optimization. These are the key research foci of the project, which concern technology, data, and information. This research aims to explore and assess some of the opportunities and practical challenges associated with digitalized interconnected environments, especially when multiple such virtual environments are interconnected. The end goal of research in these areas is to provide energy management, operations, and maintenance with the ability to leverage analysis results drawn from ever-increasing access to contextual data. The environments we discuss use interconnected technology that sits at increasing distances from the physical environment but is ultimately linked to it. Consequently, the insights and recommendations are based not just on commonalities among the connected environments, but also on the fact that interconnected information and data can be exchanged in close to real time. In turn, this rapid and frequent exchange of data can provide timely recommendations and predictions. In other words, the interconnected digital environments provide, over and above currently existing approaches, a near real-time approximation to the analyses that identify possible energy-efficiency improvements or optimal settings.

2025, Broadband Communications Networks - Recent Advances and Lessons from Practice

In the transition to automated digital management of broadband networks, communication service providers must look for new metrics to monitor these networks. Complete metrics frameworks are already emerging, while the majority of the new metrics are being proposed in technical papers. Considering common metrics for broadband networks and related technologies, this chapter offers insights into what metrics are available and also suggests active areas of research. Broadband networks, as a key component of digital ecosystems, are also an enabler of many other digital technologies and services. After first reviewing metrics for computing systems, websites, and digital platforms, the chapter's focus shifts to the most important technical and business metrics used for broadband networks. Demand-side and supply-side metrics, including the key metrics of broadband speed and broadband availability, are touched on. After outlining the broadband metrics that have been standardized and the metrics for measuring Internet traffic, the most commonly used metrics for broadband networks are surveyed in five categories: energy and power metrics, quality of service, quality of experience, security metrics, and robustness and resilience metrics. The chapter concludes with a discussion of machine learning, big data, and the associated metrics.

2025, IEEE ICCCNT

The rise in Internet of Things (IoT) devices has led to the creation of sophisticated applications that demand various resources in real time to support a wide range of IoT services. Leveraging edge computing (EC) infrastructure, these services can be effectively placed on edge nodes (ENs). However, due to the limited computational resources of ENs, it becomes challenging to manage a large number of services while maintaining the system's quality of service (QoS) and quality of experience (QoE). This paper introduces a quantum-inspired differential evolution method (QIDE-IoTSP) designed to optimize the placement of IoT services within EC networks. The primary objectives of QIDE-IoTSP are to maximize throughput, ensure optimal load balance, and minimize computation time. To achieve this, a quantum vector (QV) is utilized to develop a robust solution for the optimal deployment of IoT services in EC networks. The effectiveness of each solution is evaluated using a formulated fitness function. Simulation results demonstrate that QIDE-IoTSP surpasses other metaheuristic techniques in terms of throughput, computation latency, and load balancing.
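
To illustrate the quantum-vector idea in general terms, the sketch below keeps probability amplitudes over edge nodes for each service, samples placements from them, and rotates the amplitudes toward the best placement found. It is a generic quantum-inspired evolutionary loop with an invented load-balance fitness, not the paper's QIDE-IoTSP operators.

```python
# Generic quantum-inspired placement loop (not the paper's exact QIDE operators).
import numpy as np

rng = np.random.default_rng(0)
S, N = 6, 3                                   # services, edge nodes (assumed sizes)
demand = rng.uniform(1, 4, size=S)            # CPU demand per service (made up)
theta = np.full((S, N), np.pi / 4)            # equal amplitudes to start

def sample(theta):
    probs = np.sin(theta) ** 2
    probs /= probs.sum(axis=1, keepdims=True)
    return np.array([rng.choice(N, p=p) for p in probs])

def fitness(placement):                        # prefer balanced node loads
    loads = np.bincount(placement, weights=demand, minlength=N)
    return -loads.std()

best, best_fit = None, -np.inf
for _ in range(200):
    cand = sample(theta)
    f = fitness(cand)
    if f > best_fit:
        best, best_fit = cand, f
    for s in range(S):                         # rotate toward the best placement
        theta[s, best[s]] += 0.02
        theta[s] = np.clip(theta[s], 0.05, np.pi / 2 - 0.05)

print("best placement (service -> node):", best, "load std:", -best_fit)
```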

2025

Emergency situations are unfortunately part of our lives. Today's smart computing allows us to handle such situations and fulfil our requirements more efficiently and effectively. This paper presents an architecture to handle various kinds of emergency situations more efficiently by giving the user (victim or witness) an easy and quick way to alert the concerned department(s) with a single button press. The emergency-related information is then uploaded automatically to the mobile cloud, allowing further analysis and improvement in people's safety.

2025, Expert Systems With Applications

With the advancement of 5G networks, edge computing (EC)-assisted Internet of Things (IoT) applications demand real-time computation and high-volume, data-intensive services. Due to the heterogeneity and limited resources of edge nodes (ENs) and the dynamic resource demand of IoT applications, it is challenging to place IoT services onto the available ENs while ensuring the required quality of service (QoS). In this paper, a novel quantum-inspired particle swarm optimization-based service placement (QPSO-SP) algorithm is proposed for the EC environment. QPSO-SP is intended to achieve the desired service placement while optimizing the throughput, energy consumption, delay, and computation load of the system. A quantum particle (QP) is designed to represent a complete solution for IoT service placement in an EC environment, and the QP is decoded using a novel double-hashing technique. The fitness function uses throughput, delay, energy consumption, and load-balancing parameters. Extensive simulations are performed and compared with standard existing algorithms. A parametric study using the Taguchi method is conducted, and statistical analysis with ANOVA followed by the Friedman test is also performed. The simulation results indicate that the proposed QPSO-SP outperforms existing works in terms of energy consumption, delay, throughput, and load balancing.

2025, World Journal Of Advanced Research and Reviews

The rapid growth of Fog Computing has brought a paradigm shift in data processing and communication, presenting various benefits such as reduced latency, efficient data processing, enhanced scalability, and the ability to operate effectively in resource-constrained environments. However, the technology introduces complex privacy and security issues. This paper conducts a thorough exploration of the privacy and security issues associated with fog-to-fog (F2F) communication within the broader framework of fog computing. It begins by providing a background on fog computing, its architecture, and its core characteristics. The survey aims to discuss the state of the art of privacy and security concerns in F2F communication and the existing research gaps in this area. It also proposes areas of future research to equip researchers, practitioners, policy makers, and decision makers with solid knowledge, offering guidance in navigating the complex landscape of privacy and security issues in F2F communication. The findings of this review underscore privacy and security issues in F2F communication, providing valuable insights into recommended countermeasures to strengthen the overall security framework.

2025, IEEE conference

Wireless Sensor Networks (WSNs) find extensive applications in environmental monitoring, healthcare, and smart cities. Energy efficiency, however, continues to be a significant challenge given the limited lifetime of sensor node batteries. Conventional heuristic-based cluster head (CH) selection techniques tend not to adapt dynamically to network changes, resulting in inefficient energy utilization and decreased network lifetime. This research discusses Machine Learning (ML)-oriented CH selection approaches to improve energy efficiency, network lifetime, and data fusion. Using supervised and unsupervised learning methods, ML algorithms are able to select optimal CHs dynamically based on residual energy, network structure, and data traffic. The comparative analysis shows that ML-based CH selection enhances network stability by 95% and lowers energy consumption by 50% compared to traditional methods. The research points to the promise of ML in WSN performance optimization and opens the door to intelligent, adaptive clustering technologies.
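
A minimal sketch of one unsupervised CH-selection round under assumed data: cluster nodes by position with K-means, then elect the member with the most residual energy in each cluster as its head. The field size, node count, and energy values are invented.

```python
# Sketch: cluster sensor nodes on position, then pick the member with the most
# residual energy in each cluster as its cluster head (illustrative data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
positions = rng.uniform(0, 100, size=(60, 2))      # node coordinates in a 100x100 field
energy = rng.uniform(0.2, 1.0, size=60)            # residual battery per node (assumed)

k = 5                                              # number of clusters / cluster heads
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(positions)

cluster_heads = {
    c: int(np.argmax(np.where(labels == c, energy, -np.inf))) for c in range(k)
}
print(cluster_heads)   # cluster id -> index of the node elected as CH this round
```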

2025, IEEE Transactions on Parallel and Distributed Systems

Fog computing platforms became essential for deploying low-latency applications at the network's edge. However, placing and managing time-critical applications over a Fog infrastructure with many heterogeneous and resource-constrained devices over a dynamic network is challenging. This paper proposes an incremental multilayer resource-aware partitioning (M-RAP) method that minimizes resource wastage and maximizes service placement and deadline satisfaction in a dynamic Fog with many application requests. M-RAP represents the heterogeneous Fog resources as a multilayer graph, partitions it based on the network structure and resource types, and constantly updates it upon dynamic changes in the underlying Fog infrastructure. Finally, it identifies the device partitions for placing the application services according to their resource requirements, which must overlap in the same low-latency network partition. We evaluated M-RAP through extensive simulation and two applications executed on a real testbed. The results show that M-RAP can place 1.6 times as many services, satisfy deadlines for 43% more applications, lower their response time by up to 58%, and reduce resource wastage by up to 54% compared to three state-of-the-art methods.

2025, Satyanarayana Ballamudi

IoT platforms act as technological frameworks that provide the foundation for connecting and managing Internet of Things devices and applications. These platforms offer a wide range of services and tools that streamline the development, deployment, and operation of IoT solutions. They enable seamless integration and communication between IoT devices, facilitate data collection and analysis, provide device management capabilities, and support the creation of IoT applications. By offering a centralized and scalable infrastructure, IoT platforms play a crucial role in empowering organizations and developers to fully harness the potential of the IoT, leading to innovative and efficient IoT solutions. Research dedicated to the selection of IoT platforms therefore plays a crucial role in the industry. With the increasing number of IoT applications, making the right platform choice becomes critical for successful implementation. This research provides valuable insights that aid organizations and developers in making informed decisions when selecting an IoT platform that aligns with their specific requirements. By leveraging this knowledge, stakeholders can ensure that they choose the most suitable platform to meet their needs effectively. The objective of this research paper is to tackle the evaluation of IoT platforms as a multi-criteria decision-making (MCDM) problem, given the complexity of the multiple factors involved. To accomplish this goal, the research develops a system for creating evaluation criteria, facilitating the comprehensive assessment of IoT platforms. In the ranking based on the COPRAS method, Google Cloud IoT emerged as the top-ranked platform, demonstrating superior performance and the highest utility. Amazon AWS IoT Core followed closely in second position, showcasing strong performance and positive attributes. Microsoft Azure IoT Hub secured the third rank, highlighting its competitive performance compared to the other platforms. ThingWorx obtained the fourth rank, indicating relatively good performance according to the COPRAS method. Particle ranked fifth, placing its performance in the middle range among the evaluated platforms. Oracle IoT obtained the sixth rank, suggesting relatively lower performance compared to the other platforms. IBM Watson IoT received the seventh rank, indicating relatively weaker performance in the evaluation. These rankings offer valuable insights for decision-making and platform selection, enabling stakeholders to evaluate the overall performance and relative positions of the IoT platforms based on the COPRAS method.
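
For readers unfamiliar with COPRAS, the sketch below runs the method on an invented decision matrix with made-up criteria, weights, and scores; it shows the mechanics (weighted normalization, beneficial/cost sums, relative significance, utility), not the paper's actual evaluation.

```python
# Minimal COPRAS ranking on an invented decision matrix (the paper's criteria,
# weights, and scores are not reproduced here).
import numpy as np

platforms = ["Google Cloud IoT", "AWS IoT Core", "Azure IoT Hub", "ThingWorx"]
# columns: scalability, security, analytics (beneficial), cost (non-beneficial)
X = np.array([
    [9, 9, 8, 6],
    [9, 8, 8, 7],
    [8, 8, 8, 7],
    [7, 7, 7, 5],
], dtype=float)
w = np.array([0.3, 0.3, 0.2, 0.2])
benefit = np.array([True, True, True, False])

D = w * X / X.sum(axis=0)                 # weighted normalized matrix
S_plus = D[:, benefit].sum(axis=1)        # sums over beneficial criteria
S_minus = D[:, ~benefit].sum(axis=1)      # sums over cost criteria

Q = S_plus + S_minus.sum() / (S_minus * (1.0 / S_minus).sum())
utility = 100 * Q / Q.max()

for name, u in sorted(zip(platforms, utility), key=lambda x: -x[1]):
    print(f"{name:18s} utility = {u:5.1f}")
```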

2025, 2016 28th International Teletraffic Congress (ITC 28)

In order to overcome the cloud service performance limits, the INPUT Project aims to go beyond the typical IaaS-based service models by moving computing and storage capabilities from the datacenters to the edge network, and consequently moving cloud services closer to the end users. This approach, which is compatible with the concept of fog computing, will exploit Network Functions Virtualization (NFV) and Software Defined Networking (SDN) to support personal cloud services in a more scalable and sustainable way and with innovative added-value capabilities. This paper presents OpenVolcano, the open-source software platform under development in the INPUT Project, which will realize the fog computing paradigm by exploiting in-network programmability capabilities for off-loading, virtualization and monitoring.

2025, International Journal of Fog Computing

Big data analytics with cloud computing is one of the emerging areas for processing and analytics. Fog computing is the paradigm in which fog devices help to reduce latency and increase throughput by assisting at the edge, close to the client. This article discusses the emergence of fog computing for mining analytics in big data from geospatial and medical health applications. It proposes and develops a fog computing-based framework, FogLearn, for the application of K-means clustering to Ganga River Basin management and to real-world feature data for detecting patients suffering from diabetes mellitus. The proposed architecture employs machine learning on a deep learning framework for the analysis of pathological feature data obtained from smart watches worn by patients with diabetes and of geographical parameters from the River Ganga basin geospatial database. The results show that fog computing holds immense promise for the analysis of medical and geospa...
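
The snippet below sketches only the K-means step on synthetic "pathological feature" vectors; the feature names and distributions are invented and the real FogLearn pipeline and datasets are not reproduced.

```python
# Sketch of the K-means step on synthetic pathological feature vectors.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# columns: glucose, BMI, age -- invented stand-ins for the smart-watch features
healthy = rng.normal([95, 24, 40], [10, 3, 12], size=(100, 3))
diabetic = rng.normal([160, 31, 55], [25, 4, 10], size=(100, 3))
X = StandardScaler().fit_transform(np.vstack([healthy, diabetic]))

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
# On a fog node, such a lightweight model can pre-screen records locally and
# forward only the suspicious cluster to the cloud for deeper analysis.
```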

2025, ASEC 2022

This study proposes the adoption of IoT technology for the home monitoring of the health status of frail patients. Such a solution is intended to be part of the forthcoming Italian COT, an organizational model devoted to integrating the current national healthcare network. The prevalent deployment model of IoT systems is the Cloud, which offers powerful services and unlimited storage/computing capacity on demand; unfortunately, connecting smart devices to the Cloud poses severe issues. First of all, connected devices create large volumes of data, which inevitably leads to performance and network congestion challenges. Secondly, there are security, bandwidth, and reliability concerns that make the Cloud-only solution unsuitable for all potential real-world applications. The Fog computing paradigm has been introduced to bridge the gap between the Cloud and IoT devices. This paper gives a twofold contribution: (a) a Cloud-Fog architecture is proposed using a three-tier solution in which the Fog computing layer constitutes the middle tier; (b) simulations have been carried out to compare Cloud-Fog computing as an alternative to the Cloud-only solution. The experimental results show a remarkable reduction in latency for the former with respect to the latter. The measured benefit indicates that the best way to implement the COTs is to place the fog layer mentioned above inside them. The iFogSim open-source toolkit has been used to carry out the experiments.

2025, Proceedings of the 3rd International Conference on Future Networks and Distributed Systems

The Internet of Things (IoT) is advancing and the adoption of internet-connected devices in everyday use is constantly growing. This increase not only affects the traffic from other sources in the network, but also the communication quality requirements, such as Quality of Service (QoS), for IoT devices and applications. With the rise of dynamic network management and dynamic network programming technologies like Software-Defined Networking (SDN), traffic management and communication quality requirements can be tailored to fit niche use cases and characteristics. We propose a publish/subscribe QoS-aware framework (PSIoT-SDN) that orchestrates IoT traffic and mediates the allocation of network resources between IoT data aggregators and pub/sub consumers. The PSIoT framework allows edge-level QoS control using the features of the publish/subscribe orchestrator at IoT aggregators and, in addition, allows network-level QoS control by incorporating SDN features coupled with a bandwidth allocation model for network-wide IoT traffic management. The integration of the framework with SDN allows it to dynamically react to bandwidth sharing enabled by the SDN controller, resulting in better bandwidth distribution and higher link utilization for IoT traffic.
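
As a toy picture of QoS-aware pub/sub dispatch at an aggregator, the snippet below serves higher-priority QoS classes first within a per-cycle delivery budget. The class names, topics, and budget are assumptions and do not represent the PSIoT-SDN framework itself.

```python
# Toy publish/subscribe dispatch with per-class QoS priorities.
import heapq
from itertools import count

PRIORITY = {"alarm": 0, "telemetry": 1, "bulk": 2}   # lower number = served first
queue, seq = [], count()

def publish(topic, qos_class, payload):
    heapq.heappush(queue, (PRIORITY[qos_class], next(seq), topic, payload))

def dispatch(budget):
    """Deliver up to `budget` messages this cycle, highest QoS class first."""
    for _ in range(min(budget, len(queue))):
        prio, _, topic, payload = heapq.heappop(queue)
        print(f"deliver [{topic}] class={prio}: {payload}")

publish("grid/feeder-3", "telemetry", {"v": 229.7})
publish("grid/feeder-3", "alarm", {"overload": True})
publish("meter/123", "bulk", "daily usage report")
dispatch(budget=2)   # the SDN side would map classes to bandwidth allocations
```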

2025

This "ILLUSTRATED TECHNICAL PAPER" presents the slides describing the contents of the paper "A Pub/Sub SDN-Integrated Framework for IoT Traffic Orchestration". The talk was presented at the 3rd International Conference... more

This "ILLUSTRATED TECHNICAL PAPER" presents the slides describing the contents of the paper "A Pub/Sub SDN-Integrated Framework for IoT Traffic Orchestration". The talk was presented at the 3rd International Conference on Future Networks and Distributed<strong> Systems - ICFNDS 2019</strong>, 1 - 2 July 2019 at Paris, France. The "illustrated technical paper format" is intended to complement, enrich and subsidize the technical paper content and contains slides, complementary text and additional and/or focused bibliographic references.

2025, CRC Press eBooks

Cyber security is one of the major concerns for the peace and tranquillity of citizens. To provide a safe and secure cyber environment, a model is designed on fog computing. A realistic cyber security dataset of Internet of Things (IoT) devices and Industrial Internet of Things (IIoT) applications, called edge-IIoTset, is used to design a Bayesian learning model. The proposed model is designed on a fog layer based on hybrid computing technology. IoT data generated from devices is collected for analysis. During the analysis, regression analysis is performed for Transmission Control Protocol (TCP) traffic under Denial of Service (DoS) and Distributed Denial of Service (DDoS) attacks, injection attacks, and malware attacks. The regression analysis model is designed to estimate the probability of possible attacks and to verify the technological capability of devices. Eighty percent of the dataset is used for training and 20% for testing, after which the results are verified with regression analysis. More than one variable needs to be evaluated for the prediction of attacks.
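
A minimal sketch of the 80/20 split and an attack-probability model, using synthetic flow features and logistic regression as a stand-in for the paper's Bayesian/regression pipeline; the feature names and data are invented, not the edge-IIoTset columns.

```python
# Sketch of an 80/20 train/test split and an attack-probability model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
n = 1000
# invented features: packet rate, mean TCP window, SYN ratio
X = rng.normal(size=(n, 3))
y = (X[:, 0] + 0.8 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0.8).astype(int)  # 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)

print("test accuracy:", accuracy_score(y_te, model.predict(X_te)))
print("attack probability of first test flow:", model.predict_proba(X_te[:1])[0, 1])
```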

2025

In their day-to-day routine, every person in this world generates a lot of data. Many methodologies are used for managing these data and developing a fully functional model. With the incorporation of new paradigms like artificial intelligence, cloud computing, fog computing, and IoT, smart cities can enhance the standard of living of their residents. Different industries have started using these technologies to establish a more dynamic environment for applications. In this paper, a fog computing framework that has been used for several applications is discussed. The discussed framework concentrates on data integrity by using a secret-sharing layer and communication between the fog and cloud layers. Along with this, different constraints are discussed that emphasize the resource allocation process with optimization (HBO). The obtained results are verified by statistical analysis.

2025, Wireless Personal Communications

The cloud computing paradigm offers several services to handle a large amount of data, including data storage, exploration, and analysis. Due to the increase in Internet of Things devices, traditional computing systems are shifting to fog computing. Resource utilization is a complex task that is often compromised due to the non-availability of required resources at the fog layer. The dynamic nature of fog layer resources depends on the users’ resource requirements. Because of the limited resources available at the fog layer, a resource utilization policy is required to ensure efficient resource usability. Therefore, it is significant to allocate resources and schedule tasks on the fog layer. In this paper, a soft real-time resource utilization framework is proposed. This framework offers a zero-hour policy that caters to resource utilization. The proposed policy is evaluated on the iFogSim simulator and compared with an existing approach. The experimental results demonstrate that the zero-hour policy effectively minimizes the execution time and loop delay of applications.

2025, Springer eBooks

The fog computing paradigm is an amalgamation of traditional technologies to assist massive data-generating IoT environments. It can be considered a derivative of cloud computing that delivers cloud-like services at the edge of the network. Fog computing helps to address the most significant issues of latency and network consumption. Hence, task scheduling strategies can be implemented to obtain effective resource utilization. In this paper, a multi-fold task framework and multi-fold task clustering are presented for all tasks. The multi-fold task framework represents the layered architecture and its functionalities. The expectation-maximization (EM) clustering technique is used to group the tasks into three categories. After categorizing the tasks, the heap-based optimizer (HBO) algorithm is applied to schedule them. This paper aims to optimize resource utilization by efficiently scheduling the tasks and maintaining all available fog resources. The simulation process is executed using iFogSim.
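
The sketch below shows EM (Gaussian mixture) clustering of tasks into three categories followed by a greedy least-loaded assignment using a heap. The task parameters are invented, and the greedy heap step is only a simple stand-in for the HBO metaheuristic used in the paper.

```python
# Sketch: EM clustering of tasks into three categories, then a greedy
# least-loaded assignment via a heap (a stand-in for the HBO metaheuristic).
import heapq
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
# tasks described by (instruction length in MI, deadline in s) -- invented values
tasks = np.column_stack([rng.uniform(100, 2000, 30), rng.uniform(1, 20, 30)])
categories = GaussianMixture(n_components=3, random_state=0).fit_predict(tasks)

fog_nodes = [(0.0, f"fog-{i}") for i in range(4)]     # (current load, node id)
heapq.heapify(fog_nodes)

assignment = {}
for (length, _), cat in sorted(zip(tasks.tolist(), categories), key=lambda x: -x[0][0]):
    load, node = heapq.heappop(fog_nodes)             # least-loaded node first
    assignment.setdefault(node, []).append((round(length), int(cat)))
    heapq.heappush(fog_nodes, (load + length, node))

for load, node in sorted(fog_nodes):
    print(node, "tasks:", len(assignment.get(node, [])), "load:", round(load))
```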

2025, 2023 IEEE 12th International Conference on Communication Systems and Network Technologies (CSNT)

Data processing occurs on the central server in cloud computing, and data transfers from the node to the cloud take a long time. Fog computing has therefore been presented as a solution to these problems. Fog computing nodes handle data processing at the network edge; if data requires further processing, it is sent to the central server to complete the task. The processing time is cut down, and the data is used more effectively. Fog is generally beneficial in geographically separated places where connectivity may be uneven. It has become more common in recent years to use ML to enhance fog computing applications and provide fog services, including efficient resource management, security, lower latency and energy consumption, and traffic modelling. This article proposes a machine learning approach for a smart waste management application of fog computing. The approach is based on a CNN that classifies the provided data into different categories, which can later be used for processing waste management data. The proposed approach achieves 95.83% accuracy.
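
For orientation, here is a minimal CNN classifier skeleton of the kind such a fog node might run; the architecture, input size, and class names are assumptions and make no claim about the paper's actual model.

```python
# Minimal CNN classifier skeleton for waste-image categories (illustrative only).
import torch
import torch.nn as nn

class WasteCNN(nn.Module):
    def __init__(self, num_classes=4):          # e.g. organic, plastic, metal, paper
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                        # x: (batch, 3, 64, 64)
        x = self.features(x)
        return self.classifier(x.flatten(1))

model = WasteCNN()
logits = model(torch.randn(2, 3, 64, 64))        # two random 64x64 RGB "images"
print(logits.shape)                              # torch.Size([2, 4])
```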

2025, Journal of emerging technologies and innovative research

IoT (Internet of Things) devices have spurred research on different ways to make cloud-based systems and their services more scalable. The fog computing framework is a significant and novel concept; the idea behind it is to change the traditional approach by decentralizing the cloud. Fog computing helps to reduce the communication bandwidth required between the sensors and data centers. The fog computing methodology is beneficial for smart building design, where it must be integrated to create more intelligent facilities. The architecture of fog computing, together with its different applications, security issues, and communication protocols, is discussed in this paper.

2025, 2018 IEEE Intl Conf on Parallel & Distributed Processing with Applications, Ubiquitous Computing & Communications, Big Data & Cloud Computing, Social Computing & Networking, Sustainable Computing & Communications (ISPA/IUCC/BDCloud/SocialCom/SustainCom)

The Information Technology industry competes on the basis of its technological environment. In this environment, the use of cloud services has been increasing to provide high-quality services and fast delivery of products to cloud users. Still, some issues remain unresolved, especially those related to latency between the cloud data center and the end user. Fog computing is used to support the increasing demand for IT services in collaboration with cloud computing; it provides cloud computational and storage services in proximity to IoT devices and is an enhancement of cloud-based network and computing services. This paper discusses the concept and architecture of fog computing and an implemented application. It also highlights resource provisioning techniques to identify over-utilization of fog nodes. Along with resource utilization, different scheduling approaches are also discussed with respect to various parameters. The motive of this survey is to understand the application of fog computing to improving existing smart healthcare systems.

2025, International Journal of Information Technology and Computer Engineering

In the modern financial landscape, the exponential growth of data generated by digital transactions, IoT devices, and online banking has created significant challenges for traditional data processing methods. This paper proposes a cloud-based solution for financial data processing, utilizing advanced data compression, exploratory data analysis (EDA), and cloud data management. The adoption of Optimized Row Columnar (ORC) storage format reduces storage requirements by up to 60%, while cloud platforms like AWS, GCP, and Azure provide scalable and cost-effective resources. Real-time analytics powered by AI-driven tools enable better decision-making for financial institutions. Despite the benefits, challenges related to data privacy, latency, and system integration persist, which are mitigated through hybrid cloud models and enhanced encryption techniques.
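
As a rough illustration of the columnar-storage saving mentioned above, the snippet writes the same synthetic transaction frame as CSV and as ORC and compares file sizes, assuming pyarrow with ORC support is installed; the actual ratio depends entirely on the data.

```python
# Compare CSV vs ORC file size for the same synthetic transaction data.
import os
import numpy as np
import pandas as pd
import pyarrow as pa
import pyarrow.orc as orc

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "account_id": rng.integers(1_000, 9_999, size=200_000),
    "amount": rng.normal(250.0, 80.0, size=200_000).round(2),
    "channel": rng.choice(["web", "mobile", "branch"], size=200_000),
})

df.to_csv("transactions.csv", index=False)
orc.write_table(pa.Table.from_pandas(df), "transactions.orc")

csv_size = os.path.getsize("transactions.csv")
orc_size = os.path.getsize("transactions.orc")
print(f"CSV {csv_size/1e6:.1f} MB vs ORC {orc_size/1e6:.1f} MB "
      f"({100 * (1 - orc_size/csv_size):.0f}% smaller)")
```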

2025, Asian Journal of Applied Science and Technology (AJAST)

The AI-Based Advertisement Optimization and Performance Analytics program aims to revolutionize digital marketing through real-time automation and optimization of advertising campaigns using AI. The proposed architecture applies advanced machine learning algorithms and data analytics to large volumes of ad performance data, including click-through rates, conversion rates, audience demographics, engagement rates, and temporal patterns, and derives key performance indicators and actionable insights that drive real-time, automated campaign optimization. The system also employs predictive modeling to recommend ad placements, formats, and budgets dynamically, maximizing returns while minimizing cost per click, and complements this with audience sentiment estimation from reviews and feedback using techniques such as natural language processing (NLP) for context-relevant advertising. Reinforcement learning agents are continuously trained on fresh data and adjust strategies accordingly, keeping ads flexible and performance-driven. The solution provides stakeholders with interactive dashboards for viewing real-time ad performance across platforms like Facebook, Instagram, and Google Ads, and for analyzing and visualizing each campaign's reach and impact. The research demonstrates that AI is poised to radically change digital advertising by making it more intelligent, effective, and data-driven. It raises the bar for client-targeting precision, promoting data-based, well-informed marketing decisions and laying a foundation for autonomous management of ad campaigns.
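
To make the reinforcement-learning element described above concrete, the following is a minimal, self-contained sketch, not the system from the paper, of an epsilon-greedy bandit that chooses among candidate ad placements and updates its click-through estimates from observed feedback; the placement names and simulated click rates are illustrative assumptions.

```python
# Minimal epsilon-greedy bandit sketch for choosing ad placements.
# Placement names and the simulated click-through rates are illustrative
# assumptions, not values from the paper.
import random

PLACEMENTS = ["feed_banner", "story_ad", "search_ad"]
TRUE_CTR = {"feed_banner": 0.03, "story_ad": 0.05, "search_ad": 0.02}  # simulated environment

counts = {p: 0 for p in PLACEMENTS}    # impressions served per placement
values = {p: 0.0 for p in PLACEMENTS}  # running estimate of each placement's CTR
EPSILON = 0.1                          # exploration rate

def choose_placement():
    # Explore with probability EPSILON, otherwise exploit the best current estimate.
    if random.random() < EPSILON:
        return random.choice(PLACEMENTS)
    return max(PLACEMENTS, key=lambda p: values[p])

def update(placement, clicked):
    # Incremental mean update of the estimated click-through rate.
    counts[placement] += 1
    values[placement] += (clicked - values[placement]) / counts[placement]

for _ in range(10_000):
    p = choose_placement()
    clicked = 1.0 if random.random() < TRUE_CTR[p] else 0.0
    update(p, clicked)

print({p: round(values[p], 4) for p in PLACEMENTS})
```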

2025, International Journal of Research in Engineering Technology (IJORET)

Efficient cloud resource management is crucial for optimizing performance, reducing costs, and ensuring scalability in dynamic cloud environments. This paper proposes a novel framework that integrates ARIMA (Autoregressive Integrated Moving Average) for forecasting resource demand, Reinforcement Learning (RL) for dynamic resource scaling, and Cuckoo Search (CS) for hyperparameter optimization. The framework leverages the Cloud Computing Performance Metrics Dataset to predict future resource needs and optimize the allocation of cloud resources. The ARIMA model is used to forecast CPU, memory, and network utilization, which are fed into the RL agent to make real-time resource scaling decisions. Cuckoo Search fine-tunes the parameters of both the ARIMA and RL models to enhance their performance. Experimental results demonstrate that the proposed framework achieves 99% accuracy, 98% resource utilization efficiency, 100 ms latency, and a cost-efficiency value of 1.0. These results significantly outperform traditional methods such as Random Forest (RF) and Bi-LSTM, which show accuracy rates of 88% and 80%, respectively. This framework offers a comprehensive and efficient solution for cloud resource optimization, providing both high performance and cost savings. The combination of forecasting and real-time decision-making distinguishes this approach, making it an effective tool for modern cloud environments.
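
As a minimal sketch of the forecasting step that the framework above builds on, the code below fits a small ARIMA model from statsmodels to a synthetic CPU-utilization series and feeds the forecast into a simple threshold rule; the synthetic data, ARIMA order, and threshold are illustrative assumptions, and the paper's RL agent and Cuckoo Search tuning are not reproduced here.

```python
# Minimal sketch: ARIMA demand forecast feeding a simple scaling rule.
# The synthetic series, ARIMA order, and threshold are illustrative assumptions;
# the RL agent and Cuckoo Search tuning from the paper are not reproduced here.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Synthetic CPU-utilization history (percent), standing in for real metrics.
rng = np.random.default_rng(0)
cpu = 50 + 10 * np.sin(np.linspace(0, 12, 200)) + rng.normal(0, 2, 200)

# Fit a small ARIMA model and forecast the next few intervals.
fitted = ARIMA(cpu, order=(2, 0, 1)).fit()
forecast = fitted.forecast(steps=5)

# Placeholder scaling decision: scale out if forecast demand exceeds a threshold.
SCALE_OUT_THRESHOLD = 70.0
decision = "scale out" if forecast.max() > SCALE_OUT_THRESHOLD else "hold"
print(f"forecast: {np.round(forecast, 1)} -> decision: {decision}")
```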

2025

In today's world, technologies such as the Internet of Things, cloud computing, and fog computing are growing at an exponential rate and depend on one another directly or indirectly. The Internet of Things can be described as a network of physical objects, such as cars, washing machines, and refrigerators, that interact with one another through the internet. Billions of devices will be IoT-enabled in the near future and will generate enormous amounts of data, but IoT devices have limitations in storage, processing, and resource utilization that can only be addressed by integrating them with cloud technology. The cloud model provides an environment in which software, infrastructure, a sharable pool of configurable resources, virtual environments, sensors, hardware, and databases are offered as a utility to IoT devices and users. The cloud computing paradigm, however, has its own limitations, such as the multi-hop distance from the data source, geographically centralized structure, latency, and heterogeneity. To address these limitations, fog computing can be used to bring computing assets nearer to IoT devices. Fog computing is an enhancement of cloud-based network and computing services: it provides the computational and storage services of the cloud in proximity to IoT devices. This paper provides an overview of cloud computing for IoT devices and the issues that arise during integration, and shows how fog computing can handle those integration problems. The purpose of this survey is to understand how the concept of fog computing can improve the existing integration of cloud with IoT.

2025, International journal of modern electronics and communication engineering

This paper introduces a Digital Twin-Based Predictive Analytics approach that combines digital twins, predictive modeling, and real-time simulation to improve software performance and dependability. By simulating real-world situations, the method predicts software faults, improves fault tolerance, and guarantees effective system operation. Whereas failure prediction uses a probability model to estimate possible breakdowns, reliability prediction uses an exponential reliability function. A comparative analysis is used to evaluate performance, showing that the maximum accuracy (95.67%), failure detection rate (92.7%), and system performance efficiency (0.92 output/time) are obtained by integrating all three components. The suggested approach reduces execution time while enhancing software resilience and adaptability in comparison to conventional techniques. The results of the ablation study further verify the role of each component, showing that digital twins, simulation, and predictive modeling all work together to maximize execution performance and dependability. The Digital Twin-Based Predictive Analytics model performs better than other predictive analytics methods, as evidenced by its lowest error rate (0.05%), greatest dependability index (0.94), and fastest processing time (2.4 s). To promote proactive decision-making and scalable, reliable software solutions, our findings highlight the importance of real-time adaptation in software administration.
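
For reference, the exponential reliability function mentioned above has the standard form R(t) = exp(-λt), with failure probability F(t) = 1 - R(t); the short sketch below evaluates it for a few time points, with the failure rate chosen purely for illustration rather than taken from the paper.

```python
# Standard exponential reliability model: R(t) = exp(-lambda * t),
# with failure probability F(t) = 1 - R(t).
# The failure rate and time points are illustrative assumptions.
import math

FAILURE_RATE = 0.002  # failures per hour (assumed)

def reliability(t_hours: float) -> float:
    """Probability that the component survives beyond t_hours."""
    return math.exp(-FAILURE_RATE * t_hours)

def failure_probability(t_hours: float) -> float:
    """Probability of at least one failure within t_hours."""
    return 1.0 - reliability(t_hours)

for t in (100, 500, 1000):
    print(f"t={t}h  R(t)={reliability(t):.3f}  F(t)={failure_probability(t):.3f}")
```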

2025, International Journal of Contemporary Research in Multidisciplinary

As cyber threats evolve, the need for secure and effective sharing of threat intelligence in cloud environments has become ever more urgent. Centralized methods face serious limitations, such as data integrity risks, transparency issues, and constraints on scalability. This paper introduces an Enhancing Security Using Blockchain approach to overcome these problems. The system leverages blockchain technology to achieve decentralized threat intelligence sharing and to provide tamper evidence and transparency. The proposed method secures the data on an Ethereum blockchain while utilizing attribute-based encryption (ABE) for fine-grained access control, ensuring that only authorized parties can access sensitive data. Smart contracts automate the verification process and transactions, making them more secure and efficient. Machine learning methods such as logistic regression, random forest, and CNNs are used to capture cyber threat patterns and optimize risk detection. Experimental verification indicates that the ensemble model is 92% accurate, surpassing traditional security measures in detecting cyber-attacks and maintaining data integrity. Moreover, blockchain promotes trust between parties, preventing data manipulation and enabling transaction processing with low latency. These advantages notwithstanding, computational overhead, regulatory compliance, and integration with existing cloud infrastructure remain open topics for future study. The emphasis of this research is on improving cybersecurity through blockchain-based threat intelligence sharing, with secure, transparent, and scalable methods for managing such threats in cloud environments. Future work will focus on optimizing blockchain storage and improving computational efficiency, accompanied by better consensus schemes for scalability and acceptance.
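
To illustrate the kind of ensemble detection step described above, the sketch below combines logistic regression and a random forest in a soft-voting classifier on synthetic data; the data and any resulting score are illustrative assumptions, the CNN component is omitted, and this does not reproduce the paper's 92% result.

```python
# Minimal sketch of an ensemble classifier for threat-pattern detection.
# Synthetic data stands in for extracted threat-intelligence features;
# the CNN component of the paper's ensemble is omitted.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, n_informative=8, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("logreg", LogisticRegression(max_iter=1000)),
        ("forest", RandomForestClassifier(n_estimators=200, random_state=42)),
    ],
    voting="soft",  # average predicted class probabilities
)
ensemble.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, ensemble.predict(X_test)))
```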

2025, International Journal of Information Technology & Computer Engineering

Background: Integrating ethnographic insights with big data analytics improves healthcare systems research, especially in cardiology. This interdisciplinary approach tackles complicated issues in patient care, resource allocation, and economic evaluation, providing a more complete picture of healthcare delivery. Methods: This study adopts a hybrid approach that combines qualitative ethnographic methodologies with quantitative big data analytics. Ethnographic research documents patient-provider interactions, whereas big data analysis examines massive amounts of health data to detect trends and forecast results. Objectives: The primary goals include contextualizing big data insights using ethnographic methods, assessing the cost-effectiveness of cardiac procedures, improving decision-making by combining qualitative and quantitative approaches, and improving patient care by investigating systemic healthcare issues. Results: The Ethnographic Health Systems Research (EHSR) approach represented an advance over existing methodologies in terms of data accuracy, prediction accuracy, cost-effectiveness, patient satisfaction, and scalability, leading to substantive improvements in cardiovascular healthcare delivery.