Data Flow Diagram Research Papers

2025, Proceedings of the IEEE

2025, Proceedings of the 1st ACM symposium on Cloud computing

We present a parallel data processor centered around a programming model of so-called Parallelization Contracts (PACTs) and the scalable parallel execution engine Nephele. The PACT programming model is a generalization of the well-known map/reduce programming model, extending it with further second-order functions, as well as with Output Contracts that give guarantees about the behavior of a function. We describe methods to transform a PACT program into a data flow for Nephele, which executes its sequential building blocks in parallel and deals with communication, synchronization and fault tolerance. Our definition of PACTs allows several types of optimizations to be applied to the data flow during the transformation. The system as a whole is designed to be as generic as (and compatible with) map/reduce systems, while overcoming several of their major weaknesses: 1) The functions map and reduce alone are not sufficient to express many data processing tasks both naturally and efficiently. 2) Map/reduce ties a program to a single fixed execution strategy, which is robust but highly suboptimal for many tasks. 3) Map/reduce makes no assumptions about the behavior of the functions. Hence, it offers only very limited optimization opportunities. With a set of examples and experiments, we illustrate how our system is able to naturally represent and efficiently execute several tasks that do not fit the map/reduce model well.
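As a rough illustration of the model described above, the sketch below shows what a Reduce-style PACT with an Output Contract might look like in Java. The interfaces and the @SameKey annotation are hypothetical stand-ins chosen for this sketch, not the actual Nephele/PACT API.

```java
// Hedged sketch of a Reduce-style PACT with an Output Contract.
// Collector and @SameKey are illustrative, not the real API.
import java.util.Iterator;

interface Collector<K, V> {
    void emit(K key, V value);
}

/** Hypothetical Output Contract: the function emits records under the same
 *  key it received, so the optimizer may preserve existing partitionings. */
@interface SameKey {}

class WordCount {
    // Reduce contract: the runtime groups records by key and hands each
    // group to this function; @SameKey is the declared Output Contract.
    @SameKey
    static void reduce(String word, Iterator<Integer> counts,
                       Collector<String, Integer> out) {
        int sum = 0;
        while (counts.hasNext()) sum += counts.next();
        out.emit(word, sum);
    }
}
```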

2025, Space Science Reviews

The UV/Optical Telescope (UVOT) is one of three instruments flying aboard the Swift Gamma-ray Observatory. It is designed to capture the early (~1 minute) UV and optical photons from the afterglows of gamma-ray bursts in the 170-600 nm band, as well as to make long-term observations of these afterglows. This is accomplished through the use of UV and optical broadband filters and grisms. The UVOT has a modified Ritchey-Chrétien design with micro-channel plate intensified charge-coupled device detectors that record the arrival time of individual photons and provide subarcsecond positioning of sources. We discuss some of the science to be pursued by the UVOT and the overall design of the instrument.

2025

In this paper we present an integrated system for multimodal 3D tracking of a performer during an artistic event. This system integrates two tracking algorithms: one audio and one optical. The audio tracking algorithm was developed by DEI-CSC, University of Padova; the optical tracking algorithm was developed by the DIST-InfoMus Lab, University of Genova. The two algorithms are integrated using the EyesWeb software platform (www.eyesweb.org), which is responsible for the synchronisation and combination of the data flows.

2025, Proceedings of the fourth ACM SIGPLAN symposium on Principles and practice of parallel programming - PPOPP '93

We present a solution to the reaching definitions problem for programs with explicit lexically specified parallel constructs, such as cobegin/coend or parallel sections, both with and without explicit synchronization operations, such as Post, Wait or Advance. The reaching definitions information for sequential programs is used to solve many standard optimization problems. In parallel programs, this information can also be used to explicitly direct communication and data ownership. Although work has been done on analyzing parallel programs to detect data races, little work has been done on optimizing such programs. We show how the memory consistency model specified by an explicitly parallel programming language can influence the complexity of the reaching definitions problem. By selecting the "weakest" memory consistency semantics, we can efficiently solve the reaching definitions problem for correct programs.
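For readers unfamiliar with the base problem, here is a minimal sketch of the classical sequential reaching-definitions worklist that the paper generalizes. The gen/kill bit-vector representation is the standard textbook one; the paper's treatment of cobegin/coend (merging the out-sets of parallel branches under the chosen memory model) is not modeled here.

```java
// Classical sequential reaching-definitions worklist (sketch).
// Block numbering and gen/kill sets are hypothetical inputs.
import java.util.*;

class ReachingDefs {
    // For each block b: in[b] = union of out[p] over predecessors,
    // out[b] = gen[b] ∪ (in[b] − kill[b]).
    static BitSet[] solve(int n, List<Integer>[] preds,
                          BitSet[] gen, BitSet[] kill) {
        BitSet[] in = new BitSet[n], out = new BitSet[n];
        for (int b = 0; b < n; b++) {
            in[b] = new BitSet();
            out[b] = (BitSet) gen[b].clone();
        }
        Deque<Integer> work = new ArrayDeque<>();
        for (int b = 0; b < n; b++) work.add(b);
        while (!work.isEmpty()) {
            int b = work.poll();
            BitSet newIn = new BitSet();
            for (int p : preds[b]) newIn.or(out[p]);
            BitSet newOut = (BitSet) newIn.clone();
            newOut.andNot(kill[b]);
            newOut.or(gen[b]);
            in[b] = newIn;
            if (!newOut.equals(out[b])) {   // out changed: revisit successors
                out[b] = newOut;
                for (int s = 0; s < n; s++)
                    if (preds[s].contains(b)) work.add(s);
            }
        }
        return out;
    }
}
```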

2025, Anais do 2002 International Telecommunications Symposium

Quality of Service (QoS) provisioning is often performed on a per-node basis, on the assumption that this strategy will provide QoS in the whole domain. Nevertheless, this approach can fail in large domains with multiple flow aggregations and unexpected input traffic. Therefore, provisioning techniques should be used to avoid unpredicted overloads that result in QoS fluctuations. A proposal using fuzzy controllers to reconfigure DiffServ nodes according to ingress traffic and achieved QoS was presented in [1]. However, it is not easy to specify fuzzy rule bases and membership functions that optimize the controllers' performance. Thus, we propose a methodology for choosing optimized fuzzy controller parameters using the Wang-Mendel and genetic algorithms. Finally, we evaluate the performance of this methodology by simulating voice over IP applications in DiffServ domains.

2025, Lecture Notes in Computer Science

We recently introduced Service Clouds, a distributed infrastructure designed to facilitate rapid prototyping and deployment of autonomic communication services. In this paper, we propose a model that extends Service Clouds to the wireless edge of the Internet. This model, called Mobile Service Clouds, enables dynamic instantiation, composition, configuration, and reconfiguration of services on an overlay network to support mobile computing. We have implemented a prototype of this model and applied it to the problem of dynamically instantiating and migrating proxy services for mobile hosts. We conducted a case study involving data streaming across a combination of PlanetLab nodes, local proxies, and wireless hosts. Results are presented demonstrating the effectiveness of the prototype in establishing new proxies and migrating their functionality in response to node failures.

2025, Nuclear Instruments and Methods in Physics Research Section B: Beam Interactions with Materials and Atoms

The EURopean Illicit TRAfficking Countermeasures Kit (EURITRACK) Front-End and Data Acquisition System is a compact set of VME boards interfaced with a standard PC. The system is part of a cargo container inspection portal based on the tagged neutron technique. The front-end processes all detector signals and checks coincidences between any of the 64 pixels of the alpha particle detector and any gamma-ray signals in 22 NaI(Tl) scintillators. The system is capable of handling the data flow at neutron fluxes up to the portal limiting value of 10^8 neutrons/second. Some typical applications are presented.
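To make the coincidence logic concrete, here is a hedged sketch of the alpha-gamma time-correlation test the front-end performs. The 10 ns window and the record layout are assumptions made for illustration only, not the actual firmware.

```java
// Illustrative alpha–gamma coincidence test; window and fields are
// hypothetical, chosen only to show the time-correlation logic.
import java.util.*;

record Hit(long timeNs, int channel) {}

class CoincidenceFilter {
    static final long WINDOW_NS = 10;   // hypothetical coincidence window

    /** Pairs each alpha-pixel hit (64 channels) with any NaI(Tl) gamma
     *  hit (22 detectors) arriving within the window. */
    static List<Hit[]> coincidences(List<Hit> alphas, List<Hit> gammas) {
        List<Hit[]> pairs = new ArrayList<>();
        for (Hit a : alphas)
            for (Hit g : gammas)
                if (Math.abs(g.timeNs() - a.timeNs()) <= WINDOW_NS)
                    pairs.add(new Hit[] { a, g });
        return pairs;
    }
}
```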

2025, Processes: a view from inside

A processual texture lies at the core of phenomena that undergo change. Philosophers have observed the incessant becoming of our world from antiquity to the present day, but an explanation must still be found in terms of contemporary metaphysics. This article strives to bring the focus onto the domain of possibles, an under-theorised philosophical issue that dwells in the ontological realm between possibility and its actual realization. By sketching a more comprehensive concept of the possible and unravelling the thread of its main meanings, the process of change will not simply be clarified from inside, but also understood as an activator of reality. Finally, the discussion will argue for a principle of prudence and responsibility derived from an ontological basis. Key words: possible, logical possibility, ontological possibility, achievability, system thinking, change, augmented reality.

2025

Building context-aware mobile healthcare systems has become increasingly important with the emergence of new medical sensor technologies, the fast adoption of advanced mobile systems, and improved quality of care required by today's patients. A unique feature of our mobile healthcare system is a distributed processing paradigm whereby a set of bio-signal processing tasks is spread across a heterogeneous network. As well as applying the traditional adaptation methods such as protocol adaptation and data prioritization, the authors investigate the possibility of adaptation based on dynamic task redistribution. In this chapter, the authors propose an adaptation middleware that consists of a task assignment decision mechanism and a task redistribution infrastructure. The decision mechanism represents task

2025, Asilomar Conference on Signals, Systems and Computers

This paper presents an overview of transformations for DSP programs given in the form of coarse-grain dataflow graphs. The goal is to produce a functionally equivalent data-flow graph with improved characteristics regarding the modeling and/or implementation of DSP software. Retiming, unfolding, vectorization, clustering, as well as node/arc set extensions are discussed. As an example, an application of the presented transformations to the design of a satellite receiver is presented.
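As background for the retiming transformation named above, the standard Leiserson-Saxe relation is reproduced here; this is the textbook formulation, not a formula taken from the paper itself.

```latex
% Standard retiming relation (Leiserson–Saxe formulation). For a
% data-flow graph with edge delay counts w and a retiming r: V -> Z,
\[
  w_r(e) \;=\; w(e) + r(v) - r(u), \qquad e: u \to v ,
\]
% where a retiming is legal iff $w_r(e) \ge 0$ for every edge, i.e.
% no edge is left with a negative number of delays.
```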

2025, Proceedings Seventh International Workshop on Research Issues in Data Engineering. High Performance Database Management for Large-Scale Applications

Most existing workflow management systems (WFMSs) are based on a client/server architecture. This architecture simplifies the overall design, but it does not match the distributed nature of workflow applications and imposes severe limitations in terms of scalability and reliability. Moreover, workflow engines are not very sophisticated in terms of data management, forgetting the fact that workflow is, to a great extent, data flow. In this paper, we propose a novel architecture to address the issue of data management in a WFMS. This architecture is based on a fully distributed workflow engine for control flow, plus a set of loosely synchronized replicated databases for data flow. The resulting system offers greater robustness and reliability as well as much better data handling capabilities than existing approaches. To better illustrate this novel architecture and its implications, two commercial systems are employed in this paper: FlowMark, as the workflow engine, and the replication capabilities of Lotus Notes, as the support system for distributed data management. The replication ID distinguishes a replica from a casual database copy, which has its own, different database ID.

2025, Sistem Layanan Mandiri Di Kantor Desa Berbasis Web: Analisis dan Perancangan untuk Meningkatkan Efisiensi Pelayanan Publik

Village government as the spearhead of public services faces challenges in providing efficient services to the community. This study aims to analyze and design a web-based self-service system for village offices to improve the efficiency of administrative services. The research method uses fishbone analysis to identify problems, SWOT analysis to evaluate internal and external factors, and system design using Data Flow Diagrams (DFD) and Entity Relationship Diagrams (ERD). The results of the study show that the manual system that still dominates causes obstacles such as slow letter submission processes and complicated bureaucracy. The designed system allows the community to carry out administrative affairs independently via the web with NIK and fingerprint security features. Implementation of this system can save time, reduce queues, increase transparency, and reduce operational costs despite facing challenges of limited infrastructure and digital literacy.

2025, Collaborative …

This report describes work performed by CDM Technologies Inc. on subcontract to ManTech Advanced Systems International, Inc. (Fairmont, West Virginia), and under sponsorship of the Office of Naval Research (ONR). The principal aim of the SILS (Shipboard Integration of Logistics Systems) project is to provide a decision-support capability for Navy ships that integrates shipboard logistical and tactical systems within a near real-time, automated, computer-based shipboard readiness and situation awareness facility. Specifically, SILS is intended to provide the captain of a ship and his staff with an accurate evaluation of the current condition of the ship, based on the ability of all of its equipment, services and personnel to perform their intended functions. The SILS software system consists of two main subsystems, namely: the SILS IE (Interface Engine) subsystem for information interchange with heterogeneous external applications, developed by ManTech Advanced Systems International; and the SILS MRAT (Mission Readiness Analysis Toolkit) subsystem for intelligent decision-support with collaborative software agents, developed by CDM Technologies. This report is focused specifically on the technical aspects of the SILS MRAT subsystem. The automated reasoning capabilities of SILS MRAT are supported by a knowledge management architecture that is based on information-centric principles.

2025, SISTEM BASISDATA SENPAI APPLICATION

The development of digital technology has driven transformation in various sectors, including education and training. Digital coaching platforms have become innovative solutions for connecting competition participants with professional mentors. This research aims to design a digital coaching marketplace information system called SENPAI (Smart Education and Networking Platform for Academic Improvement) using the Database System Development Lifecycle (DBSDLC) methodology. This system connects academic and non-academic competition participants with experienced coaches through an integrated digital platform. The research uses a qualitative approach with data collection techniques through interviews, observations, and literature studies. The DBSDLC stages applied include database planning, requirements analysis, conceptual design, and logical design. The research results produce an Entity Relationship Diagram (ERD) model with 13 main entities, 25 key attributes, and 18 interconnected relationships. The system supports multi-role management (customer, coach, admin), dynamic booking systems, payment gateway integration, and additional on-site coaching services. Model validation was performed through normalization to third normal form (3NF) and transaction pathway analysis. System implementation is projected to reduce coach search time by up to 70% and increase booking process efficiency by up to 85%. The resulting database model meets the functional and non-functional requirements of the system and can serve as a foundation for developing scalable and reliable digital coaching platforms.

2025

This document provides a preliminary order-of-magnitude estimate of the number of Very Light Jet (VLJ) aircraft that will be flying in the future National Airspace System (NAS). The paper addresses the potential impacts of VLJ aircraft on NAS airports and recommends actions JPDO should take now to develop the analytical capability to address VLJs. Background: VLJ aircraft have been in development since 1999. There has been considerable skepticism in the aviation community about the technical and economic feasibility of VLJ vehicles. However, development has continued, with start-ups as well as established companies now developing VLJ aircraft. Eclipse announced its entry into the market in 1999, with Adam following in 2001. More recently, established aircraft manufacturers have announced their entry into the market: Cessna in 2002, and then Embraer, an established air transport manufacturer. In addition, Honda, a Japanese manufacturer of automobiles, has developed a flying prototype VLJ and has used it in image advertising.

2025, Adaptive Agents and Multi-Agents Systems

The success of contract-based multiagent systems relies on agents complying with their commitments. When something goes wrong, the key to diagnosis lies within the commitments' mutual relations as well as their individual states. Accordingly, we explore how commitments are related through the three-agent commitment delegation operation. We then propose exception diagnosis based on such a relation.

2025, HAL (Le Centre pour la Communication Scientifique Directe)

Software projects using Simulink or Scade in fact use a subset of Simulink or Scade. The 'alignment' of these two subsets gives rise to a new concept, the 'Unified MBD', whose data-flow part keeps the expressiveness of both languages and was introduced in a former paper. Here we present the control-flow part, which is much more restrictive, because the automata paradigms of Scade and Simulink differ fundamentally. In fact, at least in our context, this loss of expressiveness is not a weakness, because our current design modelling practice is consistent with these restrictions, and because it opens the road to simple but faithful specification modelling practices.

2025, Journal of Current Science

Data security and privacy face major issues due to the rapid expansion of the Internet of Things (IoT) and the move away from centralized cloud systems toward decentralized edge computing. This paper investigates how edge computing environments might improve IoT service security and privacy with anonymized AI, striving to protect sensitive data while preserving high performance and efficiency by utilizing methods like homomorphic encryption, secure multi-party computation, and federated learning. The methodology entails extensive testing of AI models, performance assessments, integration of user feedback, and guaranteeing adherence to data protection laws. The results reveal that anonymized AI works well, preserves privacy, and secures Internet of Things data, suggesting that it has considerable potential for wider real-world applications.

2025, Computer Law & Security Review

Surveillance is becoming ubiquitous in our society. We can also see the emergence of "smart" surveillance technologies and the assemblages (or combinations) of such technologies, supposedly to combat crime and terrorism, but in fact used for a variety of purposes, many of which are intrusive upon the privacy of law-abiding citizens. Following the dark days of 9/11, security and surveillance became paramount. More recently, in Europe, there has been a policy commitment to restore privacy to centre stage. This paper examines the legal tools available to ensure that privacy and personal data protection are respected in attempts to ensure the security of our society, and finds that improvements are needed in our legal and regulatory framework if privacy is indeed to be respected by law enforcement authorities and intelligence agencies. It then goes on to argue that privacy impact assessments should be used to sort out the necessity and proportionality of security and surveillance programmes and policies vis-à-vis privacy.

2025, HAL (Le Centre pour la Communication Scientifique Directe)

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

2025, IFAC Proceedings Volumes

This work was carried out as part of the BID-CONICET Project No. 83, "Modeling, Simulation and Control of a Flexible Manufacturing Cell, FMC". It deals with the design and construction of the control system for a Tutorial Flexible Manufacturing Cell (TFMC). The control system modelling was built on a Data Flow Diagram basis. The implementation itself was carried out with Artificial Intelligence techniques, and the computer language used was Prolog. The hardware consisted basically of PCs and peripheral devices. The control system has been built and tested and is at present operating at the CUDAR.

2025, Proceedings of 30th Annual International Symposium on Microarchitecture

Traces are dynamic instruction sequences constructed and cached by hardware. A microarchitecture organized around traces is presented as a means for efficiently executing many instructions per cycle. Trace processors exploit both control flow and data flow hierarchy to overcome complexity and architectural limitations of conventional superscalar processors by (1) distributing execution resources based on trace boundaries and (2) applying control and data prediction at the trace level rather than at individual branches or instructions. Three sets of experiments using the SPECInt95 benchmarks are presented. (i) A detailed evaluation of trace processor configurations: the results affirm that significant instruction-level parallelism can be exploited in integer programs (2 to 6 instructions per cycle). We also isolate the impact of distributed resources, and quantify the value of successively doubling the number of distributed elements. (ii) A trace processor with data prediction applied to inter-trace dependences: potential performance improvement with perfect prediction is around 45% for all benchmarks. With realistic prediction, gcc achieves an actual improvement of 10%. (iii) Evaluation of aggressive control flow: some benchmarks benefit from control independence by as much as 10%.

2025

Dataflow Models of Computation (MoCs) are widely used in embedded systems, including multimedia processing, digital signal processing, telecommunications, and automatic control. In a dataflow MoC, an application is specified as a graph of actors connected by FIFO channels. One of the most popular dataflow MoCs, Synchronous Dataflow (SDF), provides static analyses to guarantee boundedness and liveness, which are key properties for embedded systems. However, SDF (and most of its variants) lacks the capability to express the dynamism needed by modern streaming applications. In particular, the applications mentioned above have a strong need for reconfigurability to accommodate changes in the input data, the control objectives, or the environment. We address this need by proposing a new MoC called Reconfigurable Dataflow (RDF). RDF extends SDF with transformation rules that specify how the topology and actors of the graph may be reconfigured. Starting from an initial RDF graph and a set ...
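The boundedness guarantee that SDF provides rests on its balance (rate-consistency) equations. The sketch below checks them for a single edge of a two-actor graph; the actor names and topology are illustrative, and RDF's transformation rules are not modeled.

```java
// SDF balance condition for one edge A -> B, where A produces p tokens
// per firing and B consumes c. Names and topology are illustrative.
class SdfBalance {
    /** Smallest repetition counts (qA, qB) with qA * p == qB * c,
     *  which keeps the FIFO on the edge bounded. */
    static int[] repetitions(int p, int c) {
        int g = gcd(p, c);
        return new int[] { c / g, p / g };  // fire A c/g times, B p/g times
    }

    static int gcd(int a, int b) { return b == 0 ? a : gcd(b, a % b); }

    public static void main(String[] args) {
        int[] q = repetitions(3, 2);        // A emits 3, B consumes 2
        System.out.println("qA=" + q[0] + " qB=" + q[1]);  // qA=2 qB=3
    }
}
```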

2025, Proceedings of the Second International Conference on Informatics in Control, Automation and Robotics

The Distributed Supervision Systems that have been used extensively for the last fifteen years in the process industry are now evolving towards higher-level solutions based on better connections between applications and processes, which ensure that data flows from the process to management boards. Knowledge sharing seems to be a key issue in integrating these heterogeneous systems. In this paper we present an ontology as a first step towards achieving semantic interoperability. The ontology has been conceived within the context of a complex integration problem, in which heterogeneous toolboxes cooperate to deal with several supervision, fault detection and diagnostic tasks for chemical processes. Regarding the current trends in ontology research, our proposal is consistent with top-level ontologies, as these kinds of ontologies seem to overcome the ontology integration problem. We describe a preliminary version of the ontology. The conceptualisation of control variables, system behaviour, supervision tasks, models and system properties is given. All attributes of, and relationships between, the concepts have been defined. The ontology has been developed using Protégé-2000.

2025, Document Tracking System A Case Study Of University Of Jos

The World Wide Web is an interactive hypermedia system built upon the Internet, with millions of users worldwide, that links governments, businesses, research institutes, etc. In setting up a Document Tracking System, the distinctive mission is to record and track document movements, record information about them, and make life easy for the staff of the institution. Hence this application is designed to provide information on how documents are recorded and tracked around the institution. These include individual memos, staff requests for duty allowance, local purchasing orders, annual leave applications, purchase advance forms, payment vouchers, maternity leave, etc. A combination of the Apache server, MySQL, the Chisimba framework and PHP was used to develop an interactive online Document Tracking System for the University of Jos.

2025, Lecture Notes in Computer Science

Context-sensitive interprocedural data-flow analysis, for problems where the flow value is a mapping from symbols to values from an abstract domain, requires solutions to be computed for differing aliasing conditions existing at distinct calling contexts. This paper presents an approach for computing context-sensitive solutions to forward, monotone data-flow problems for statically allocated scalar variables that does not require reanalysis of procedures. The algorithm handles dynamic aliasing, due to non-recursive pointer types, and recursion. This paper applies the technique to constant propagation for statically allocated scalars. We propose an elimination-style approach that computes a single canonical transfer function for a procedure, under the assumption that no aliases hold between its arguments (including both explicitly and implicitly passed globals) on entry. The canonical transfer functions are expressed as a set of parameterised data flow mappings, augmented with sequence tokens and alias assertions. The sequence tokens and alias assertions succinctly capture sufficient control-flow and alias conditions, respectively, so that accurate solutions in the presence of aliasing can be computed from the canonical one. The information represented by the sequence tokens allows approximations that yield conservative solutions, with varying levels of flow and context sensitivity.
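A hedged sketch of the core idea, a parameterised transfer function for constant propagation that is computed once and instantiated per calling context, follows. The Value algebra below is illustrative, and the paper's sequence tokens and alias assertions are omitted.

```java
// Sketch of a parameterised transfer function for constant propagation.
// The Value algebra is illustrative, not the paper's exact representation.
import java.util.*;

sealed interface Value permits Const, Param, NonConst {}
record Const(int k) implements Value {}          // known constant
record Param(String formal) implements Value {}  // "whatever arg held on entry"
record NonConst() implements Value {}            // not a constant

class CanonicalTransfer {
    // Canonical effect of the procedure: per-variable result mapping,
    // computed once under the no-alias-on-entry assumption.
    final Map<String, Value> effect;
    CanonicalTransfer(Map<String, Value> effect) { this.effect = effect; }

    /** Instantiate the canonical function for one calling context. */
    Map<String, Value> apply(Map<String, Value> entry) {
        Map<String, Value> out = new HashMap<>();
        for (Map.Entry<String, Value> e : effect.entrySet()) {
            Value v = e.getValue();
            out.put(e.getKey(), v instanceof Param p
                    ? entry.getOrDefault(p.formal(), new NonConst())
                    : v);
        }
        return out;
    }
}
```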

2025, KAMI MENGABDI

Edraw Max is 2D business technical diagramming software that helps create flowcharts, organization charts, mind maps, network diagrams, floor plans, workflow diagrams, business charts, engineering diagrams, graphs and mappings. Edraw Max is also vector-based diagramming software, usually used to make designing easier. One example is the design of a production process system in a factory, which can use this application. In this training, the Edraw Max tools focus on making Data Flow Diagrams (DFD), which are useful for describing the flow of the created system and consist of Context Diagrams, Level Diagrams, and Level-n Diagrams. A Data Flow Diagram (DFD) is a diagram that describes the flow of data in a process or information system. In a DFD, there is information related to the input and output of each of these processes. A DFD also has various functions, such as conveying system design, describing systems, and designing models. This ...

2025

This work presents the implementation of a multitasking-capable operating system for FPGA processors. With this operating system, it is possible for several independent applications to use the FPGA-processor resource simultaneously. The main part of the work deals with the fast capture of the state of a circuit on the FPGA device and, equally, with the fast reconstruction of that state when the circuit is executed again on the FPGA processor. Based on these fundamental functions, an operating system has been created that enables the processing of several independent applications on different FPGA processors both effectively and flexibly. To further increase performance, a special multitasking-supporting FPGA processor was built as part of this work, whose architecture reduces context-switch times to a minimum. Thanks to the novel architecture of this FPGA processor, the execution of the applications ...

2025, International Computer Music Conference

ChucK re-factors the idea of a computer music language into three orthogonal basis components: unit generator connections that are data-flow only, globally consistent "first-class" time control, and sample-synchronous concurrency. The syntax, semantics, and usage have been discussed in previous works. The focus and contributions of this paper are (1) to examine the philosophies and decisions in the language design, (2) to describe ChucK's implementation and runtime model, and (3) to outline potential applications enabled by this framework. We present an experiment in designing a computer music language "from scratch" and show how things work. We hope these ideas may provide an interesting reference for future computer music systems.

2025, ACM SIGGRAPH Computer Graphics

Cameron introduces modular visualization environments and the motivation behind employing the data-flow paradigm and visual programming. In a true data-flow implementation, all modules are pure functions. Hence, processes are stateless with no side-effects (Arvind and Brobst [1]). Consider the figure, a visual program that imports data, computes both an isosurface and a planar mapping, and renders the results in a single image. Imagine a set of available processes waiting for their inputs from the processes upstream in the network, or asynchronously from associated input devices or interactors. The Collect module waits for inputs from the Isosurface and MapToPlane modules. When their inputs are received, they run, and when finished they distribute their results to the modules waiting downstream. Import would send its results to the waiting Isosurface and MapToPlane modules. In effect, this execution mode is entirely data-driven and top-down: the execution of modules is dependent solely on the passage of data through the system.

2025, IBM Systems Journal

Business event processing requires efficiently processing live events, computing business performance metrics, detecting business situations, and providing real-time visibility of key performance indicators. Given the high volume of events and the significant complexity of the computation, event throughput is critical to system performance. In this paper, we advocate model-analysis techniques to improve event throughput. At build time, a series of model analyses of the application logic are conducted to understand such factors as the runtime data-access path, data flow, and control flow. Such analyses can be used to improve throughput in three ways: at build time, they can facilitate the generation of customized code to optimize I/O and CPU usage; information about the control flow and data flow can be used to ensure that CPU resources are used effectively by distributing event-processing computation logic evenly over time; and at runtime, knowledge gained from the model can be used to plan multithreaded parallel event-processing execution to reduce wait states by maximizing parallelization and reducing the planning overhead. This paper presents a series of model-analysis techniques and the results of experiments that demonstrate their effectiveness. Business event processing [1-5] enables processing continuous live events, computing metric values, and detecting situations in real time, thereby supporting applications such as program trading, fraud management, and location-based services. It represents a new generation of enterprise data management and is gaining considerable momentum in both academia and industry. A user-friendly language is needed to support business event processing. In our design, we use the popular ECA (event condition action) rule-based programming model with substantial extensions to support the computation of the active metric network. The rule-based programming model allows application developers to realize business rules in event-processing applications, thus freeing them from transforming declarative logic into details of procedural logic.
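The ECA programming model referred to above can be shown with a minimal rule; the event type and the volume threshold below are hypothetical, and the paper's metric-network extensions are omitted.

```java
// Minimal ECA (event–condition–action) rule in plain Java.
// TradeEvent and the threshold are hypothetical examples.
import java.util.function.Consumer;
import java.util.function.Predicate;

record TradeEvent(String symbol, double price, long volume) {}

class EcaRule<E> {
    final Predicate<E> condition;
    final Consumer<E> action;
    EcaRule(Predicate<E> condition, Consumer<E> action) {
        this.condition = condition;
        this.action = action;
    }
    void onEvent(E e) { if (condition.test(e)) action.accept(e); }
}

class Demo {
    public static void main(String[] args) {
        EcaRule<TradeEvent> largeTrade = new EcaRule<>(
            t -> t.volume() > 1_000_000,                     // condition
            t -> System.out.println("ALERT " + t.symbol())); // action
        largeTrade.onEvent(new TradeEvent("IBM", 120.5, 2_000_000));
    }
}
```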

2025, Korean Economic Review

Human capital is increasingly believed to play an important role in the growth process; however, adequately measuring its stock remains controversial. This paper identifies three general approaches to human capital measurement: cost-based, income-based and education-based, and presents a critical review of the theories and their applications to data from a range of countries. Emphasis on empirical evidence is given to the case of New Zealand.

2025, International Journal of Innovative Research in Engineering and Management (IJIREM)

The influence of various failure distribution laws on the reliability of fiber-optic data transmission system (FODTS) components is analyzed. The article discusses the failure distribution laws applicable to the hardware and software components of the system, which make it possible to take into account their interaction and mutual influence on the overall reliability of the system. As a methodology, reliability assessment models are used, in particular the exponential distribution, the Weibull distribution, and the Musa-Okumoto software reliability model. Formulas for assessing the overall reliability of the system and probability density functions are given, providing a more complete picture of the stability and reliability of the FODTS under operating conditions.
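For reference, the standard forms of the three models named above are given below; the article's exact parameterisation may differ.

```latex
% Standard forms: exponential and Weibull survival functions for
% hardware, and the Musa–Okumoto logarithmic model for software.
\[
  R_{\exp}(t) = e^{-\lambda t}, \qquad
  R_{\mathrm{Weib}}(t) = e^{-(t/\eta)^{\beta}}, \qquad
  \mu_{\mathrm{MO}}(t) = \tfrac{1}{\theta}\,\ln(\lambda_0 \theta t + 1),
\]
% where $\mu_{\mathrm{MO}}(t)$ is the expected number of software
% failures by time $t$. For independent components in series,
% $R_{\mathrm{sys}}(t) = \prod_i R_i(t)$.
```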

2025

The objectives of this study are: 1) Improved Integrated Data Bank services for staffing data and information; 2) Enhanced support system for data and information service server usage; 3) Improving data and information services that are accurate for Staffing Management. The research method used is Research and Development (R&D). The results show that: 1) ...; 2) the support system for the use of data and information service servers runs according to the Integrated Data Bank application; 3) personnel management data and information services can be used as an accurate staffing database and show that the Integrated Data Bank staffing management information system can be used as a supporting management system for staff at SMK Muhammadiyah Karangpucung; 4) Integrated Data Bank products have gone through the process of design validation, instrument validation, and product validation through expert judgment, which produced data in the form of Data Flow Diagrams (DFD), Unified Modeling Language (UML) designs, and...

2025, The Australian surveyor

Current technology and digital products such as digital cadastral maps have put pressure on institutions involved in the cadastral process to cooperate to maximise efficiencies. To discuss ways that the entire cadastral process and its users can take advantage of the opportunities offered by the current and possible future technology it is necessary to study the entire cadastral process as a single system. The authors contend that the maintenance of the digital cadastral map in any cadastral system must be viewed as a process that involves all entities in the cadastral process. This paper attempts to provide a framework, which encompasses all the spatial processes of the cadastral system, to discuss and define spatial data maintenance with a focus on the digital cadastral map. The paper proposes the concept of data flows from information systems analysis techniques to establish this discussion framework. The terminology and definitions of spatial cadastral map maintenance are defined as a basis for reviewing the digital cadastral map in a number of different jurisdictions in Australia and Europe. An examination of the processes undertaken by each of the entities or institutions within the cadastral system highlights some possible future spatial data maintenance models.

2025, arXiv (Cornell University)

A demand-driven approach to program analysis computes only the information that is needed to serve a target demand. In contrast, an exhaustive approach computes all information in anticipation of it being used later. Demand-driven methods have primarily been viewed as efficient algorithms as they compute only the information that is required to meet a given set of demands. However, for a given set of demands, they are believed to compute the same information that would be computed by the corresponding exhaustive methods. We investigate the precision and bidirectional nature of demand-driven methods and show that: (a) demand-driven methods can be formalized inherently as bidirectional data flow analysis, and (b) for some analyses, demand-driven method can compute more precise information than the corresponding exhaustive method. The formalization as a bidirectional analysis follows because the demands are propagated against the control flow and the information to satisfy the demands is propagated along the control flow. We extend the formalization of the Meet Over Paths solution to bidirectional flows, by introducing the concept of qualified control flow paths to explicate the forward and backward flows. This formalization helps us to prove the soundness and precision of our analysis. The approximation caused by data abstraction used for heap locations (e.g. allocation-site-based abstraction or type-based abstraction) is a source of imprecision in pointer analysis. Since a demand-driven method computes information for a smaller set of demands, it has to deal with less imprecision caused by the data abstractions. This could lead to more precise results than an exhaustive method. We show that while this is indeed the case for Java, for C/C++, the precision critically hinges on how indirect assignments are handled. We use this insight and propose a demand-driven alias analysis that is more precise than an exhaustive analysis for C/C++ too. We have chosen static resolution of virtual function calls as an application to demonstrate the precision gain of our demand-driven alias analysis for C++. Our measurements show that our method is more precise and more efficient (for both allocation-site-based and type-based abstractions) than both, the existing demand-driven method, as well as the corresponding exhaustive method. This precision is measured in terms of the number of monomorphic call-sites, the number of virtual call edges, and the number of class types discovered by the methods.
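The virtual-call application measured in the paper can be seen in a few lines of Java: if a demand-driven points-to query for the receiver returns a single allocation site, the dispatch is monomorphic and can be resolved statically. The class names below are illustrative.

```java
// If a points-to query proves `s` can only refer to a Circle, the
// virtual call s.area() is monomorphic and can be resolved statically.
abstract class Shape { abstract double area(); }

class Circle extends Shape {
    final double r;
    Circle(double r) { this.r = r; }
    double area() { return Math.PI * r * r; }
}

class Square extends Shape {
    final double s;
    Square(double s) { this.s = s; }
    double area() { return s * s; }
}

class Client {
    public static void main(String[] args) {
        Shape s = new Circle(2.0);   // only allocation reaching `s`
        // Demand: "what does `s` point to at the call below?"
        // Answer {Circle} => the dispatch is monomorphic.
        System.out.println(s.area());
    }
}
```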

2025

I declare that this written submission represents my ideas in my own words and where others' ideas or words have been included, I have adequately cited and referenced the original sources. I also declare that I have adhered to all principles of academic honesty and integrity and have not misrepresented or fabricated or falsified any idea/data/fact/source in my submission. I understand that any violation of the above will be cause for disciplinary action by the Institute and can also evoke penal action from the sources which have thus not been properly cited or from whom proper permission has not been taken when needed.

2025, ACM SIGPLAN Notices

Research in bidirectional data flow analysis seems to have come to a halt due to an impression that the case for bidirectional data flow analysis has been considerably weakened by a plethora of investigations based on decomposability of known bidirectional placement algorithms into a sequence of purely unidirectional components. This paper shows that the approach of decomposability is not general enough in that it derives its power from the simplifying graph transformation of edge-splitting and the favourable nature of flows in partial redundancy elimination (PRE). This follows from the fact that in the absence of edge-splitting, PRE cannot be performed using a sequence of cascaded unidirectional flows. Further, edge-splitting inherently converts data flows involved in PRE into unidirectional flows. In our opinion, this obviates the need of an alternative formulation. We also show that edge-splitting cannot convert data flows involved in "truly" bidirectional data flow pro...

2025, Computer Languages, Systems & Structures

Tennenbaum's data flow analysis based formulation of type inferencing is termed bidirectional in the "Dragon Book"; however, it fails to qualify as a formal data flow framework and is not amenable to complexity analysis. Further, the types discovered are imprecise. Here, we define a formal data flow framework (based on bidirectional data flow analysis) which discovers more precise type information and is amenable to complexity analysis. We compare data flow analyses with the more general constraint-based analyses and observe that data flow analyses represent program analyses without unbounded auxiliary store. We show that if unlimited auxiliary store is allowed then no data flow analysis would need more than two passes; if auxiliary store is disallowed then type inferencing requires bidirectional data flow analysis.

2025, ACM Transactions on Programming Languages and Systems

The classical theory of data flow analysis, which has its roots in unidirectional flows, is inadequate to characterize bidirectional data flow problems. We present a generalized theory of bit vector data flow analysis which explains the known results in unidirectional and bidirectional data flows and provides a deeper insight into the process of data flow analysis. Based on the theory, we develop a worklist-based generic algorithm which is uniformly applicable to unidirectional and bidirectional data flow problems. It is simple, versatile, and easy to adapt for a specific problem. We show that the theory and the algorithm are applicable to all bounded monotone data flow problems which possess the property of the separability of solution. The theory yields valuable information about the complexity of data flow analysis. We show that the complexity of worklist-based iterative analysis is the same for unidirectional and bidirectional problems. We also define a measure of the complexity...
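A minimal sketch of such a direction-agnostic worklist driver is given below; the node flow functions and the exact bidirectional update rule of the paper are simplified away, so this only illustrates the shape of the generic algorithm.

```java
// Worklist driver parameterised by flow direction (sketch).
// `influenced` returns the nodes whose solution may change when n
// changes: successors for a forward problem, predecessors for a
// backward one, and the union of both for a bidirectional problem.
import java.util.*;
import java.util.function.*;

class GenericWorklist {
    static <N> Map<N, BitSet> solve(Collection<N> nodes,
                                    Function<N, Collection<N>> influenced,
                                    BiFunction<N, Map<N, BitSet>, BitSet> flow) {
        Map<N, BitSet> sol = new HashMap<>();
        for (N n : nodes) sol.put(n, new BitSet());
        Deque<N> work = new ArrayDeque<>(nodes);
        while (!work.isEmpty()) {
            N n = work.poll();
            BitSet updated = flow.apply(n, sol);   // node flow function
            if (!updated.equals(sol.get(n))) {
                sol.put(n, updated);
                work.addAll(influenced.apply(n));  // revisit affected nodes
            }
        }
        return sol;
    }
}
```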

2025, IEEE Transactions on Very Large Scale Integration Systems

A folded very large scale integration (VLSI) architecture is presented for the implementation of the two-dimensional discrete wavelet transform, without constraints on the choice of the wavelet-filter bank. The proposed architecture is dedicated to flexible block-oriented image processing, such as adaptive vector quantization used in wavelet image coding. We show that reading the image along a two-dimensional (2-D) pseudo-fractal scan creates a very modular and regular data flow and, therefore, considerably reduces the folding complexity and memory requirements for VLSI implementation. This leads to significant area savings for on-chip storage (up to a factor of two) and reduces the power consumption. Furthermore, data scheduling and memory management remain very simple. The end result is an efficient VLSI implementation with a reduced area cost compared to the conventional approaches, reading the input data line by line.
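The key point above is that the image is scanned block-recursively rather than line by line. Morton (Z-order) indexing, sketched below, is a representative example of such a pseudo-fractal scan order, though not necessarily the exact order used in the paper.

```java
// Morton (Z-order) indexing: a representative block-recursive 2-D
// scan, shown as a stand-in for the paper's pseudo-fractal scan.
class MortonScan {
    /** Interleaves the bits of (x, y) into a single Z-order index. */
    static long index(int x, int y) {
        long z = 0;
        for (int b = 0; b < 16; b++) {
            z |= (long) ((x >> b) & 1) << (2 * b);
            z |= (long) ((y >> b) & 1) << (2 * b + 1);
        }
        return z;
    }

    public static void main(String[] args) {
        // Visit a 4x4 image in Z-order: (0,0)(1,0)(0,1)(1,1)(2,0)...
        long[][] order = new long[4][4];
        for (int y = 0; y < 4; y++)
            for (int x = 0; x < 4; x++)
                order[y][x] = index(x, y);
        System.out.println(java.util.Arrays.deepToString(order));
    }
}
```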

2025, IEEE Transactions on Circuits and Systems for Video Technology

2025

Design of large dependable real-time embedded systems relies on the integration of software elements onto a set of hardware nodes. In this paper, an overall process for generating a Platform Specific Model (PSM) from a set of Platform Independent Models (PIMs) and a hardware resource description is presented in the course of designing such embedded systems. Subsystems with different requirements are allocated and integrated onto shared hardware resources based on the constraints of the specified HW resources. The focus of this paper lies on developing the methodologies and tools for the generation of the PSM. Accordingly, a dual-track approach has been proposed: the first track is based on a transformational process and the second on a Multi Variable Optimization (MVO) approach.

2025

Most power systems today are operated at the capacity limit. The power flow has to be guaranteed by using the data flow between all the components of the power system management system, consisting of all the electronics acquiring data, making decisions, issuing commands, running simulations, archiving, etc. The main goal is to secure the supply and quality of power. Therefore, actual data have to be exchanged together with their semantics, so that all users of this information understand its meaning. This is especially necessary in emergency cases. The source and the sink of all process data is the substation with its automation system. Therefore, the standard IEC 61850 for communication in substations is the key to many solutions for the power system, since it not only standardizes the communication but also provides an object-oriented data model applicable to the complete power system. It is shown that all requirements are fulfilled by this standard. The substation configuration description language of the standard is also applicable to system-wide configuration. It supports the optimal utilization, operation and maintenance of the power system.

2025, JURNAL PENGABDIAN MASYARAKAT AKADEMISI

Currently, the implementation of IoT devices is widely utilized by various parties to support the increasingly complex aspects of human life. IoT devices have become a favorite among hardware and software developers due to their ease of use and flexibility in adapting to today's evolving systems. One of the uses of IoT is in the realm of the digital economy, which represents a new way of prioritizing technology and ease in the buying and selling transaction process between sellers and buyers. In the past, payment processes for goods or services involved paper money and coins, but nowadays payments are made using digital media. However, current digital payment methods still rely on centralized storage technology. Therefore, if the data held at the center is damaged or compromised, it can disrupt the flow of financial transactions, and user data privacy becomes a further concern.