Greedy Algorithms Research Papers - Academia.edu

A Minimum Vertex Cover is a smallest set of vertices whose removal leaves the graph with no edges, i.e., every edge has at least one endpoint in the cover. In this paper, we perform experiments on a number of graphs from standard complex-network databases, addressing the problem of finding a "good" vertex cover (finding an optimum is an NP-hard problem). In particular, we take advantage of the ubiquitous power-law degree distribution present in many complex networks. In our experiments, we show that running a greedy algorithm on a power-law graph yields a very small vertex cover, typically about 1.02 times the theoretical optimum. This is an interesting practical result, since theoretically we know that: (1) in a general graph on n vertices, a greedy approach cannot guarantee a factor better than ln n; (2) the best approximation algorithm known at the moment is very involved and has a much larger factor of 2 − Θ(1/√(log n)). In fact, in the context of approximation within a constant factor, it is conjectured that no (2 − ε)-approximation exists for any ε > 0.
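As a rough illustration (not the authors' implementation), the degree-greedy heuristic evaluated in experiments like these can be sketched as:

```python
def greedy_vertex_cover(edges):
    """Repeatedly add the highest-degree vertex to the cover
    and drop its incident edges (a common greedy heuristic)."""
    remaining = {frozenset(e) for e in edges}
    cover = set()
    while remaining:
        # count degrees over the still-uncovered edges
        degree = {}
        for e in remaining:
            for v in e:
                degree[v] = degree.get(v, 0) + 1
        v = max(degree, key=degree.get)  # greedy choice
        cover.add(v)
        remaining = {e for e in remaining if v not in e}
    return cover

# Star graph: the center alone covers every edge
print(greedy_vertex_cover([(0, 1), (0, 2), (0, 3)]))  # → {0}
```

On power-law graphs the few high-degree hubs cover most edges early, which is the intuition behind the near-optimal covers reported above.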

Abstract. Distributed denial of service (DDoS) attacks are a serious problem in the present-day Internet. We consider the design of a scalable agent-based system for collecting information about the structure and dynamics of DDoS attacks. Our system requires ...

The travelling salesman problem (TSP) is an NP-hard problem in combinatorial optimization. A solution to this problem can be successfully used in touring airports to find shortest routes through selections of airports around the world. The travelling salesman problem can be solved using ant colony optimization, genetic algorithms, simulated annealing, etc. However, these approaches require advanced data structures and are hard to implement in simple programming languages such as PHP and JavaScript. Furthermore, these conventional approaches use advanced mathematical concepts that are difficult for researchers without a strong mathematical background. The greedy approach can also be used to find an approximation to the shortest route; the greedy strategy follows the heuristic of visiting, at each stage, the unvisited city nearest to the current city. However, the pure greedy algorithm may fail to produce an optimal solution and can get stuck in a suboptimal one. The objective of this study is to provide a way to overcome this drawback of the greedy approach in solving the travelling salesman problem. First, a route is found using the greedy approach. Then the route is modified at appropriate positions so that its length is reduced. The benefits of the proposed solution are threefold: (1) it reduces the use of computational resources and gives better results than the pure greedy approach; (2) it uses the array data structure, which can easily be implemented in any programming language; (3) it can easily be understood and modified by researchers from a non-mathematical background.
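The two-stage idea, building a greedy route and then modifying it where that shortens the tour, can be sketched as follows. The paper's exact modification step is not reproduced here; a common array-based choice is 2-opt segment reversal, so the sketch below assumes that:

```python
def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def nearest_neighbour(dist, start=0):
    """Stage 1: greedy route, always visit the nearest unvisited city."""
    n = len(dist)
    tour, visited = [start], {start}
    while len(tour) < n:
        cur = tour[-1]
        nxt = min((c for c in range(n) if c not in visited), key=lambda c: dist[cur][c])
        tour.append(nxt)
        visited.add(nxt)
    return tour

def two_opt(tour, dist):
    """Stage 2: reverse segments while any reversal shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                new = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(new, dist) < tour_length(tour, dist):
                    tour, improved = new, True
    return tour

# Unit square: the optimal tour has length 4.0
pts = [(0, 0), (1, 0), (1, 1), (0, 1)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts] for ax, ay in pts]
route = two_opt(nearest_neighbour(dist), dist)
print(tour_length(route, dist))  # → 4.0
```

Everything here is plain arrays and loops, in keeping with the paper's goal of staying implementable in simple languages.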

BFS, DFS, Bidirectional Search, Iterative Deepening Search, Depth-Limited Search, Uniform Cost Search, Greedy Best-First Search, A* Search, Local Search Algorithms, Hill-Climbing Search, Simulated Annealing Search, Local Beam Search, Heuristic Functions.

Congklak, or Dakon, is one of the traditional games of Indonesia, generally played by two players with the goal of collecting as many congklak seeds as possible while following the game's rules. In this paper we discuss the use of the greedy best-first search algorithm by the computer player to select the hole that can produce an optimal move. The player can choose whether to collect as many seeds as possible or to protect vulnerable holes whose contents could be captured by the opponent. The test results show that the greedy algorithm is able to produce quite good local solutions, although not necessarily globally optimal ones.
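A minimal sketch of greedy move selection in this spirit, with a deliberately simplified, hypothetical scoring function (the real game's capture rules are not modeled):

```python
def pick_hole(holes, score):
    """Greedy best-first move choice: take the hole whose estimated
    score is highest under a caller-supplied heuristic."""
    return max(range(len(holes)), key=lambda i: score(holes, i))

# Toy heuristic: prefer the hole currently holding the most seeds
board = [7, 0, 3, 9, 2]
print(pick_hole(board, lambda h, i: h[i]))  # → 3
```

In practice the score function would weigh both seeds gained and the vulnerability of the player's remaining holes, as the abstract describes.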

Sometime around 1994, an eleventh-grade student was thinking of building a Pascal unit that would help him implement the backtracking method quickly. For this he had a single asset at hand, which was repeated at... more

Abstract - The Traveling Salesman Problem (TSP) is a classic optimization problem with a simple concept that is nevertheless difficult to solve conventionally. The goal is to find the shortest route through a number of cities along a particular path such that each city is visited exactly once and the journey ends with a return to the starting city. The research uses input data consisting of cities and the distances between them, which are processed using a greedy algorithm with markers to obtain the shortest route and its total cost. The markers in the algorithm ensure that each city is visited only once. The marker-based greedy approach yields a solution with a far shorter computational time than a number of other algorithms, such as Brute Force and Dynamic Programming.
Keywords: Traveling Salesman Problem; Greedy Algorithm; Markers; Complexity

Abstract - A knapsack is a container used for storing objects whose sizes fit within some capacity. The problem that often arises is searching for the optimal choice of objects to be inserted into a container with limited... more
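A common greedy heuristic for knapsack-style problems orders items by value-to-weight ratio and takes them while they fit; the sketch below uses made-up items and is not taken from the paper:

```python
def greedy_knapsack(items, capacity):
    """0/1 knapsack heuristic: take items in decreasing value/weight
    ratio while they still fit (not guaranteed optimal)."""
    chosen, total_w, total_v = [], 0, 0
    for name, w, v in sorted(items, key=lambda it: it[2] / it[1], reverse=True):
        if total_w + w <= capacity:
            chosen.append(name)
            total_w += w
            total_v += v
    return chosen, total_v

# (name, weight, value) triples, illustrative only
items = [("a", 2, 10), ("b", 3, 9), ("c", 5, 12)]
print(greedy_knapsack(items, 5))  # → (['a', 'b'], 19)
```

The ratio ordering makes each step locally best, which is exactly where the heuristic can miss the global optimum on adversarial inputs.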

The Yu-Gi-Oh Capsule Monster game is a unique board game that demands good tactics and strategy. Play during a match affects the outcome, but no less important is the pre-match choice of the most suitable monster to face the opponent. Using a greedy algorithm, the best monster is selected to fight the opponent. Monsters are chosen using particular parameters, so the task is to find the parameters and constraints best suited to producing the best monster selection.

This paper discusses a technique for using the greedy algorithm to play the game of Domino. The greedy algorithm was chosen because it is easy to understand and implement, even though its results are often suboptimal. Although there are many strategies for playing dominoes, none is guaranteed to win the game.

The greedy method is the most popular method for solving optimization problems and is often used to obtain an optimal solution. An optimization problem is the problem of finding an optimum solution; there are two kinds, maximization and minimization. One problem the greedy method can solve is making change. In this journal, we take as a case study the problem of how to optimally make change for 125 thousand dollars.
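The change-making greedy described above can be sketched as follows (the denominations are illustrative; greedy change-making is optimal only for canonical coin systems, which is part of what such case studies examine):

```python
def greedy_change(amount, denominations):
    """Greedy change-making: always take the largest denomination
    that still fits into the remaining amount."""
    result = []
    for d in sorted(denominations, reverse=True):
        while amount >= d:
            result.append(d)
            amount -= d
    return result

# Making change for 125 (thousand) with assumed denominations
print(greedy_change(125, [100, 50, 20, 10, 5, 1]))  # → [100, 20, 5]
```

With a non-canonical system such as {1, 3, 4}, greedy change for 6 gives 4+1+1 instead of the optimal 3+3, which is the classic counterexample.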

Orthogonal matching pursuit (OMP) is the canonical greedy algorithm for sparse approximation. In this paper we demonstrate that the restricted isometry property (RIP) can be used for a very straightforward analysis of OMP. Our main conclusion is that the RIP of order K+1 (with isometry constant δ_{K+1} < 1/(3√K)) is sufficient for OMP to exactly recover any K-sparse signal. The analysis relies on simple and intuitive observations about OMP and matrices which satisfy the RIP. For restricted classes of K-sparse signals (those that are highly compressible), a relaxed bound on the isometry constant is also established. A deeper understanding of OMP may benefit the analysis of greedy algorithms in general. To demonstrate this, we also briefly revisit the analysis of the regularized OMP (ROMP) algorithm.
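A minimal OMP sketch, assuming NumPy; this illustrates the algorithm being analyzed, not the paper's proofs:

```python
import numpy as np

def omp(A, y, K):
    """Orthogonal Matching Pursuit: greedily pick the dictionary column
    most correlated with the residual, then re-fit by least squares."""
    residual, support = y.astype(float).copy(), []
    for _ in range(K):
        j = int(np.argmax(np.abs(A.T @ residual)))  # greedy atom choice
        support.append(j)
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s          # orthogonal projection step
    x = np.zeros(A.shape[1])
    x[support] = x_s
    return x

# Exact recovery of a 2-sparse vector in the trivial identity dictionary
x_true = np.array([0.0, 3.0, 0.0, -2.0, 0.0])
A = np.eye(5)
print(omp(A, A @ x_true, 2))  # → [ 0.  3.  0. -2.  0.]
```

The re-fit over the whole support at each step is what distinguishes OMP from plain matching pursuit, and it is the step the RIP-based analysis leans on.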

In our information societies, we increasingly delegate tasks and decisions to automated systems, devices and agents that mediate human relationships by taking decisions and acting on the basis of algorithms. Their increased intelligence, autonomous behavior and connectivity are crucially changing the life conditions of human beings, as well as altering traditional concepts and ways of understanding reality. Algorithms are directed at solving problems whose relevance and timeliness are not always apparent, through procedures that are not always visible or assessable on their own. In addition, technologies based on algorithmic procedures increasingly infer personal information from aggregated data, thus profiling human beings and anticipating their expectations, views and behaviors. This may have normative, if not discriminatory, consequences. While algorithmic procedures and applications are meant to serve human needs, they risk creating an environment in which human beings tend to develop adaptive strategies, conforming their behaviour to the expected output of the procedures, with serious distortive effects. Against this backdrop, little room is often left for a process of rational argumentation able to challenge the results of algorithmic procedures by questioning some of their hidden assumptions or by taking into account some neglected aspects of the problems under consideration. At the same time, it is widely recognized that scientific and social advances crucially depend on such open and free critical discussion.

Data mining and knowledge discovery in databases have been attracting a significant amount of research, industry, and media attention of late. There is an urgent need for a new generation of computational theories and tools to assist researchers in extracting useful information from the rapidly growing volumes of digital data.

In this paper, we compare three methods (Pareto, the Knapsack Problem (KP) and the Greedy Algorithm) in order to deduce which of them is the most effective. The purpose of this study is to remedy halts and unforeseen breakdowns on production chains as well as to increase the efficiency of production, which is the major concern of industry leaders. The choice of maintenance management method directly affects the stability of productivity and the cost of maintenance actions. We compare the results obtained from these three methods: Pareto, which is widely used in maintenance; the Knapsack Problem, which is widely used in freight management; and the Greedy Algorithm, which is used in scheduling. We apply the three methods to an actual case study in order to improve the industrial maintenance structure in the production of soft drinks. The synthesis is based on two variables: the number of failures of each machine over two months and the budget to repair them. We show that the Greedy scheduling algorithm gave results 3.50% better than those obtained using the Knapsack Problem (KP) formulation from freight management, which in turn gave results 8% better than the Pareto method that is specific to industrial maintenance. Our future goal is to design a new method that incorporates the most effective elements of each of these three existing methods.
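The greedy scheduling idea, repairing the most failure-prone machines first while the budget allows, can be sketched with hypothetical machine data (the paper's case-study figures are not reproduced here):

```python
def greedy_maintenance(machines, budget):
    """Greedily schedule repairs for the most failure-prone machines
    first, skipping any repair that would exceed the budget."""
    plan, spent = [], 0
    for name, failures, cost in sorted(machines, key=lambda m: m[1], reverse=True):
        if spent + cost <= budget:
            plan.append(name)
            spent += cost
    return plan, spent

# (machine, failures in two months, repair cost) - invented values
machines = [("filler", 12, 400), ("capper", 7, 250), ("labeler", 3, 150)]
print(greedy_maintenance(machines, 600))  # → (['filler', 'labeler'], 550)
```

Here the capper is skipped because repairing it would exceed the budget, so the greedy plan moves on to the next affordable machine.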

In this paper, we propose an approach for fast pedestrian detection in images. Inspired by histogram of oriented gradient (HOG) features, a set of multi-scale orientation (MSO) features is proposed as the feature representation. The features are extracted on square image blocks of various sizes (called units), containing coarse and fine features, where the coarse features are the unit orientations and the fine features are the pixel orientation histograms of the unit. A cascade of AdaBoost classifiers is trained on the coarse features, aiming at high detection speed. A greedy search algorithm is employed to select fine features, which are input into SVMs to train the fine classifiers, aiming at high detection accuracy. Experiments report that our approach obtains state-of-the-art results while running 12.4 times faster than the SVM+HOG method.
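The greedy selection of fine features can be illustrated abstractly as forward selection against a caller-supplied score; the scoring function below is a toy stand-in, not the paper's SVM-based criterion:

```python
def greedy_select(features, score, k):
    """Greedy forward selection: repeatedly add the feature that most
    improves the validation score, stopping when nothing helps."""
    chosen = []
    for _ in range(k):
        best = max((f for f in features if f not in chosen),
                   key=lambda f: score(chosen + [f]))
        if score(chosen + [best]) <= score(chosen):
            break  # no candidate improves the score
        chosen.append(best)
    return chosen

# Toy additive score: each feature has a fixed "gain"
gains = {"f1": 5, "f2": 3, "f3": 1}
score = lambda sel: sum(gains[f] for f in sel)
print(greedy_select(list(gains), score, 2))  # → ['f1', 'f2']
```

A real detector would score candidate feature sets by cross-validated classifier accuracy rather than a fixed per-feature gain.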

ABC Appliances (Pvt) Ltd., one of the leading companies in Sri Lanka, has supplied and installed a very large number of air-conditioning units all over the country. The company currently provides a comprehensive after-sales service for its customers. At present the service department is interested in reducing the cost involved in regular after-sales service. In this research we propose a Travelling Salesman Problem (TSP) approach to minimize the cost involved in service tours. We used the nearest-neighbour search algorithm to obtain solutions to the TSP. Computational examples show that the new service routes obtained using this algorithm reduce the travelling cost significantly in comparison to the existing routes.

Compressive sensing (CS) has recently emerged as a framework for efficiently capturing signals that are sparse or compressible in an appropriate basis. While often motivated as an alternative to Nyquist-rate sampling, there remains a gap between the discrete, finite-dimensional CS framework and the problem of acquiring a continuous-time signal. In this paper, we attempt to bridge this gap by exploiting the Discrete Prolate Spheroidal Sequences (DPSSs), a collection of functions that trace back to the seminal work by Slepian, Landau, and Pollak on the effects of time-limiting and band-limiting operations. DPSSs form a highly efficient basis for sampled bandlimited functions; by modulating and merging DPSS bases, we obtain a dictionary that offers high-quality sparse approximations for most sampled multiband signals. This multiband modulated DPSS dictionary can be readily incorporated into the CS framework. We provide theoretical guarantees and practical insight into the use of this dictionary for the recovery of sampled multiband signals from compressive measurements.

The increasing complexity of new applications means GUIs are also getting more complex, and generating test cases for them manually becomes harder. Generating automatic, good-quality GUI test cases is a growing concern in application testing. Actions performed by the user on the GUI can be regarded as events, which can be performed in sequences, forming a graph of event sequences; therefore multiple execution paths or routes, known as test cases, are possible. The quality of a set of test cases is measured by the coverage criteria (all actions or events must be performed at least once in the set), which depend on the length and partial coverage of each execution path. Finding feasible paths that comply with the coverage criteria is a highly combinatorial problem. For such problems, given the high computing power it would take to find an exact solution, it is well justified to use heuristic and metaheuristic algorithms, allowing us to find approximate solutions of good quality. Those methods have been successfully used in chemistry, physics, biology and, recently, software engineering. In this paper, the use of a metaheuristic known as the Ant Colony Optimization (ACO) algorithm for generating test cases is proposed. The ACO metaheuristic has been adapted to find individual routes that could lead to a set of test cases of good quality. An individual test, path or route is desirable if it is long (it tests a lot of events or actions) and does not share events (or shares few events) with other paths. After an appropriate number of candidate test cases are generated, we express the problem of generating a set of test cases as a set covering problem and apply a greedy algorithm to solve it. The result is a set of paths (test cases) with full coverage of events and a small number of test cases. We also present a problem solved by our method, generating test cases for Windows WordPad, and discuss the results.
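The final step, greedy set covering over the candidate paths, can be sketched as follows (the events and paths are invented for illustration):

```python
def greedy_set_cover(universe, candidates):
    """Classic greedy set cover: repeatedly pick the candidate path
    covering the most still-uncovered events."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(candidates, key=lambda s: len(uncovered & set(s)))
        if not uncovered & set(best):
            break  # remaining events are unreachable by any candidate
        chosen.append(best)
        uncovered -= set(best)
    return chosen

events = {"open", "type", "bold", "save", "close"}
paths = [["open", "type", "save"], ["open", "bold"], ["save", "close"], ["close"]]
print(greedy_set_cover(events, paths))
```

On this toy input, three of the four candidate paths suffice to cover all five events; the greedy rule yields the well-known ln n approximation guarantee for set cover.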

We present a novel graph-based approach to finding an optimal set of influencers from a large pool. The goal is to select the minimum number of influencers that can reach the desired audience. To find such a set directly, one would have to compute the reach of all possible combinations of available influencers, resulting in complexity of order O(2^n). Our proposed greedy approach selects the pair of influencers that yields the highest reach at every iteration, reducing the complexity to O(n^2). Our work is complemented with an analysis of 550 Instagram influencers and over 100,000 posts. From this analysis, we concluded that influencers who prefer quality over quantity receive better engagement: influencers sharing 3 posts per week, and posts with captions over 500 characters long, received relatively better engagement numbers.

1. INTRODUCTION

With 3G/4G telecom services now common and offered at among the world's most affordable rates in Pakistan, 67 million locals have mobile broadband internet access. According to a Gallup & Gillani poll, 48% of internet users consume online content daily [1]. One content-producer category is the "social/digital influencer", followed by 'fans' numbering from a few thousand (micro-influencers) to millions (celebrities). Their digital content, whether pictures, videos or live streams, is delivered on digital and social channels (Instagram, Facebook, YouTube, Beigo, TikTok) and covers a wide range of genres, from lifestyle, fashion and health to entertainment and comedy. Brands can use social and digital influencers to launch and manage effective organic marketing campaigns. Pakistan's SMEs, comprising 98%+ of all companies, actively use influencer marketing as a cost-effective strategy to engage and sell directly or indirectly to Pakistan's online population [2], an estimated "trade" value of PKR 5 billion*/year [3].
For sellers, influencers represent an impactful and efficient outreach channel, as they affect the purchase behaviors and choices, brand perceptions and even the national narratives of ~30% of Pakistan's population [4]. A key challenge is to optimize influencer marketing budgets by cost-efficiently identifying the most effective set of influencers who can reach desired audiences, from prospects and buyers to promoters. Considering only an influencer's number of followers is misleading, as there is an underlying overlap between the followers of influencers who have contextual similarities; that is, two influencers from the same city and age group, with the same genre/category and content themes, are likely to appeal to the same people following them both. To solve this overlap problem, we focus on one of the most influencer-driven social networks, Instagram. We propose a greedy algorithm to determine the minimum number of influencers that attains the best reach.
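The greedy reach-maximization step can be sketched as classic greedy maximum coverage over follower sets (the influencer names and audiences below are hypothetical):

```python
def pick_influencers(followers, k):
    """Greedy maximum coverage: at each step add the influencer whose
    follower set contributes the most audience not yet covered."""
    covered, picked = set(), []
    for _ in range(k):
        name = max(followers, key=lambda n: len(set(followers[n]) - covered))
        if not set(followers[name]) - covered:
            break  # nobody adds new audience
        picked.append(name)
        covered |= set(followers[name])
    return picked, covered

followers = {
    "ali":  {1, 2, 3, 4},
    "sara": {3, 4, 5},   # overlaps heavily with ali
    "omar": {6, 7},
}
print(pick_influencers(followers, 2))
```

Despite sara's larger follower count, the greedy step prefers omar in round two because his audience does not overlap the already-covered one, which is exactly the overlap problem the paper targets.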

Finding the most effective route through congestion is essential for all road users. Current methods for finding an effective route that avoids congestion are still too complex and cannot cope with instantaneous road conditions, which change constantly over time. Navigating the best route requires repeated recalculation, which calls for an efficient routing algorithm. In this paper, we compare greedy approaches: a plain (non-optimal) greedy algorithm, Dijkstra's algorithm, and the A* algorithm. By comparing their complexity and their results on a real-world case, we determine the best approach to implement. The proposed algorithms find the best-cost route while adapting to road conditions as traffic volumes change dynamically. The comparison among these greedy algorithms then identifies the best option for navigating the most effective route through congestion.
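Of the algorithms compared, Dijkstra's is the most standard; a minimal sketch over an adjacency dictionary (the road names and weights are illustrative, and weights would be refreshed as traffic volumes change):

```python
import heapq

def dijkstra(graph, source):
    """Dijkstra's algorithm over {node: [(neighbour, weight), ...]};
    returns the cheapest cost from source to every reachable node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": [("D", 5)]}
print(dijkstra(roads, "A"))  # → {'A': 0, 'B': 3, 'C': 1, 'D': 8}
```

A* follows the same structure but orders the heap by cost plus a heuristic estimate of the remaining distance, which is what makes it faster on road networks.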

In on-grid microgrids, electric vehicles (EVs) have to be efficiently scheduled for cost-effective electricity consumption and network operation. The stochastic nature of the involved parameters, along with their large number and correlations, makes such scheduling a challenging task. This paper aims at identifying innovative solutions for reducing the total costs of on-grid EVs within hybrid microgrids. To optimally schedule the EVs, a heuristic greedy approach is considered. Unlike most existing scheduling methodologies in the literature, the proposed greedy scheduler is model-free, training-free, and yet efficient. The proposed approach considers factors such as the electricity price, the on-grid EVs' states at arrival and departure, and the total revenue needed to meet the load demands. The greedy-based approach behaves satisfactorily in fulfilling its objective for the hybrid microgrid system, which is composed of photovoltaic generation, a wind turbine, and a local utility grid. Meanwhile, the on-grid EVs are utilized as an energy-storage exchange location. A real-time hardware-in-the-loop experiment is comprehensively conducted to maximize the earned profit. Through different uncertainty scenarios, the ability of the proposed greedy approach to obtain a globally optimal solution is assessed. A data simulator was developed to generate evaluation datasets that capture uncertainties in the behavior of the system's parameters. The greedy-based strategy is applicable, scalable, and efficient in terms of total operating expenditure. Furthermore, as EV penetration became more versatile, total expenses decreased significantly. Using simulated data covering an effective operational duration of 500 years, the proposed approach succeeded in cutting energy consumption costs by about 50-85%, beating existing state-of-the-art results. The proposed approach proved tolerant to the large amounts of uncertainty involved in the system's operational data.
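One simple greedy policy consistent with the description, charging in the cheapest hours, can be sketched as follows (the hourly prices are invented, and the paper's scheduler accounts for many more factors, such as arrival/departure states and revenue):

```python
def schedule_charging(prices, hours_needed):
    """Greedy EV charging: pick the cheapest hours of the horizon,
    then return them in chronological order with the total cost."""
    ranked = sorted(range(len(prices)), key=lambda h: prices[h])
    chosen = sorted(ranked[:hours_needed])
    cost = sum(prices[h] for h in chosen)
    return chosen, cost

# Hypothetical hourly electricity prices over a six-hour window
hourly_price = [30, 12, 8, 25, 10, 40]
print(schedule_charging(hourly_price, 3))  # → ([1, 2, 4], 30)
```

Because the rule is a pure sort over observed prices, it is model-free and training-free in the same spirit as the scheduler described above.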

Abstract - We study the problem of landmark selection for landmark-based routing in a network of fixed wireless communication nodes. We present a distributed landmark selection algorithm that does not rely on global clock synchronization, and a companion local greedy ...