Van Ray Gultom - Academia.edu

Papers by Van Ray Gultom

Research paper thumbnail of Robust Bidirectional Search via Heuristic Improvement

Although the heuristic search algorithm A* is well-known to be optimally efficient, this result explicitly assumes forward search. Bidirectional search has long held promise for surpassing A*'s efficiency, and many varieties have been proposed, but it has proven difficult to achieve robust performance across multiple domains in practice. We introduce a simple bidirectional search technique called Incremental KKAdd that judiciously performs backward search to improve the accuracy of the forward heuristic function for any search algorithm. We integrate this technique with A*, assess its theoretical properties, and empirically evaluate its performance across seven benchmark domains. In the best case, it yields a factor of six reduction in node expansions and CPU time compared to A*, and in the worst case, its overhead is provably bounded by a user-supplied parameter, such as 1%. Viewing performance across all domains, it also surpasses previously proposed bidirectional search algorithms. These results indicate that Incremental KKAdd is a robust way to leverage bidirectional search in practice.
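As a rough illustration of the underlying KKAdd idea (not the authors' Incremental KKAdd implementation), a bounded backward search from the goal yields exact goal distances for the nodes it closes, and the smallest heuristic error observed on the backward frontier gives an admissible additive correction for every other node. The function name `backward_improve`, the unit-cost setting, and the toy path graph are all assumptions for this sketch:

```python
from collections import deque

def backward_improve(neighbors, goal, h, budget):
    """Bounded backward BFS from the goal (unit edge costs).
    Nodes it closes get their exact goal distance; every other node
    gets h(n) plus the smallest heuristic error seen on the backward
    frontier, a KKAdd-style correction that preserves admissibility."""
    dist = {goal: 0}
    closed = set()
    q = deque([goal])
    while q and len(closed) < budget:
        n = q.popleft()
        closed.add(n)
        for m in neighbors(n):
            if m not in dist:
                dist[m] = dist[n] + 1
                q.append(m)
    frontier = [n for n in dist if n not in closed]
    # Minimum error of h on the perimeter; 0 if the search exhausted the space.
    eps = min((dist[n] - h(n) for n in frontier), default=0)
    def improved(n):
        return dist[n] if n in closed else h(n) + eps
    return improved

# Toy example: path graph 0-1-...-9, goal 9, blind heuristic h = 0.
nbrs = lambda n: [m for m in (n - 1, n + 1) if 0 <= m <= 9]
h2 = backward_improve(nbrs, goal=9, h=lambda n: 0, budget=4)
```

With a budget of 4 expansions the backward search closes nodes 9, 8, 7, 6, so `h2(8)` is the exact distance 1, while a distant node like 0 gets the correction `0 + 4 = 4`, still a lower bound on its true distance of 9.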

Research paper thumbnail of Iterative-Deepening Search with On-line Tree Size Prediction

The memory requirements of best-first graph search algorithms such as A* often prevent them from solving large problems. The best-known approach for coping with this issue is iterative deepening, which performs a series of bounded depth-first searches. Unfortunately, iterative deepening only performs well when successive cost bounds visit a geometrically increasing number of nodes. While it happens to work acceptably for the classic sliding tile puzzle, IDA* fails for many other domains. In this paper, we present an algorithm that adaptively chooses appropriate cost bounds on-line during search. During each iteration, it learns a model of the search tree that helps it to predict the bound to use next. Our search tree model has three main benefits over previous approaches: 1) it will work in domains with real-valued heuristic estimates, 2) it can be trained on-line, and 3) it is able to make predictions with only a small number of training examples. We demonstrate the power of our improved model by using it to control an iterative-deepening A* search on-line. While our technique has more overhead than previous methods for controlling iterative-deepening A*, it can give more robust performance by using its experience to accurately double the amount of search effort between iterations.
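The goal of doubling search effort between iterations can be illustrated with a deliberately crude bound chooser: count the f-values at which the last iteration pruned, and pick the smallest bound predicted to admit about twice as many nodes. This is a stand-in for the paper's learned tree-size model, not the model itself (`next_bound` is a hypothetical name, and counting only pruned nodes ignores their descendants):

```python
def next_bound(expanded, pruned_f):
    """Pick the next IDA* cost bound so the coming iteration is
    predicted to expand roughly twice as many nodes as the last one.
    `expanded` is the node count of the last iteration; `pruned_f` is
    the (possibly real-valued, assumed non-empty) list of f-values at
    which nodes were cut off."""
    target = 2 * expanded
    seen = expanded
    for f in sorted(pruned_f):
        seen += 1
        if seen >= target:
            return f
    # Too few pruned nodes to double: admit all of them.
    return max(pruned_f)
```

For example, after expanding 10 nodes and pruning 5 nodes at f = 5.1 and 10 at f = 5.5, the predicted doubling point lies within the 5.5 layer, so 5.5 becomes the next bound; real-valued f-values pose no problem, which is the first benefit the abstract claims for the learned model.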

Research paper thumbnail of Index Terms— Iterative Deepening Bi-directional Heuristic Front-to-Front Algorithm (IDBHFFA), Bi-directional Heuristic Front-to-Front Algorithm (BHFFA), Bi-directional Depth-First Iterative Deepening (DFID) Search, Bi-directional Heuristic Path

Artificial Intelligence (AI) is a subject that studies techniques for making computers exhibit intelligent behavior. Searching remains one of the central problems in AI. Bi-directional search is performed by searching simultaneously in the forward direction from the initial node and in the backward direction from the goal node. Bi-directional heuristic search algorithms need less time and space than their unidirectional versions. The Bi-directional Heuristic Front-to-Front Algorithm (BHFFA) is one such bidirectional heuristic search algorithm. However, it has some disadvantages: it needs to store many unnecessary nodes prior to termination, and in large problem spaces the computational overhead of selecting the next node to expand increases significantly. This paper presents a modification to BHFFA called the Iterative Deepening Bi-directional Heuristic Front-to-Front Algorithm (IDBHFFA) that has been analyzed and implemented using the 8-puzzle problem. The proposed algorithm performs BHFFA in a number of iterations. Two thresholds are maintained, one for each search frontier. In each iteration, non-promising nodes on a search frontier whose cost estimates exceed the corresponding threshold are pruned, and the thresholds increase with each successive iteration. With an admissible heuristic function the proposed algorithm returns optimal solutions, and it can considerably reduce the computational time and memory requirements of BHFFA.
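The per-frontier pruning step described above can be sketched in isolation: keep nodes within the frontier's threshold and raise the threshold to the smallest pruned estimate, exactly as IDA* raises its bound. This is a minimal sketch of the mechanism, not the full IDBHFFA (`prune_step` is a hypothetical name, and in the real algorithm pruned nodes are regenerated in the next iteration rather than discarded):

```python
def prune_step(frontier, threshold):
    """One IDBHFFA-style iteration step on a single search frontier.
    `frontier` is a list of (node, cost_estimate) pairs. Returns the
    kept nodes and the next threshold: the smallest estimate that was
    pruned, or the old threshold if nothing exceeded it."""
    kept = [(n, f) for n, f in frontier if f <= threshold]
    pruned = [f for _, f in frontier if f > threshold]
    next_threshold = min(pruned) if pruned else threshold
    return kept, next_threshold
```

Running it on a frontier `[('a', 3), ('b', 7), ('c', 5)]` with threshold 4 keeps only `('a', 3)` and raises the threshold to 5, so `'c'` becomes admissible in the next iteration; each of the two search frontiers maintains its own threshold this way.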

Research paper thumbnail of Distributed Memory Breadth-First Search Revisited: Enabling Bottom-Up Search

Breadth-first search (BFS) is a fundamental graph primitive frequently used as a building block for many complex graph algorithms. In the worst case, the complexity of BFS is linear in the number of edges and vertices, and the conventional top-down approach always takes as much time as the worst case. A recently discovered bottom-up approach manages to cut down the complexity all the way to the number of vertices in the best case, which is typically at least an order of magnitude less than the number of edges. The bottom-up approach is not always advantageous, so it is combined with the top-down approach to make the direction-optimizing algorithm which adaptively switches from top-down to bottom-up as the frontier expands. We present a scalable distributed-memory parallelization of this challenging algorithm and show up to an order of magnitude speedups compared to an earlier purely top-down code. Our approach also uses a 2D decomposition of the graph that has previously been shown to be superior to a 1D decomposition. Using the default parameters of the Graph500 benchmark, our new algorithm achieves a performance rate of over 240 billion edges per second on 115 thousand cores of a Cray XE6, which makes it over 7× faster than a conventional top-down algorithm using the same set of optimizations and data distribution.
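The direction-optimizing idea can be sketched sequentially: expand small frontiers top-down, but once the frontier touches many edges, let each unvisited vertex instead scan its own neighbors for a parent in the frontier (bottom-up). This is a serial toy version of the switching logic only, nothing like the paper's distributed 2D-decomposed implementation, and `alpha` is a hypothetical tuning knob, not a Graph500 parameter:

```python
def hybrid_bfs(adj, source, alpha=4):
    """Direction-optimizing BFS sketch on an undirected graph given as
    {vertex: [neighbors]}. Returns the BFS level of every reachable
    vertex; the levels are identical whichever direction each step
    takes, only the work per step differs."""
    level = {source: 0}
    frontier = {source}
    depth = 0
    while frontier:
        depth += 1
        frontier_edges = sum(len(adj[u]) for u in frontier)
        if frontier_edges * alpha < len(adj):          # top-down step
            nxt = {v for u in frontier for v in adj[u] if v not in level}
        else:                                          # bottom-up step
            nxt = {v for v in adj if v not in level
                   and any(u in frontier for u in adj[v])}
        for v in nxt:
            level[v] = depth
        frontier = nxt
    return level
```

The bottom-up step wins on large frontiers because an unvisited vertex can stop scanning as soon as it finds any frontier parent, which is what cuts the best-case work down toward the number of vertices.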

Research paper thumbnail of Computing Breadth First Search in Large Graph Using hMetis Partitioning

In this paper, we present a new algorithm for breadth-first search traversal based on hMetis partitioning. hMetis is a hypergraph partitioning algorithm that divides the massive graph into sub-graphs. Our heuristic approach to computing the breadth-first search (h_BFS) starts at the partition that contains the source node and then processes the other fragments piece by piece based on the border edges. Experimental results show that the proposed algorithm is simpler and faster at traversing large graphs that do not fit in memory, measured by both running time and the number of I/O operations.
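The fragment-processing order described above amounts to a BFS over the partition graph itself, starting from the partition holding the source. A minimal sketch under assumed inputs (`fragment_order` is a hypothetical name; `parts` and `cut_edges` are supplied by hand here, where the real algorithm would obtain them from an hMetis cut):

```python
from collections import deque

def fragment_order(parts, cut_edges, source):
    """Sketch of the h_BFS processing order: start with the partition
    containing the source node, then pull in neighboring partitions
    through the border (cut) edges, one fragment at a time.
    `parts` maps partition id -> vertex set; `cut_edges` maps
    partition id -> adjacent partition ids."""
    start = next(p for p, vs in parts.items() if source in vs)
    order, seen, q = [], {start}, deque([start])
    while q:
        p = q.popleft()
        order.append(p)
        for nb in cut_edges.get(p, []):
            if nb not in seen:
                seen.add(nb)
                q.append(nb)
    return order
```

Because each fragment is loaded and traversed as a unit, a graph that exceeds memory only ever needs one partition resident at a time, which is where the I/O savings come from.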

Research paper thumbnail of A Modified Multiple Depth First Search Algorithm for Grid Mapping Using Mini-Robots Khepera

Research paper thumbnail of DEPTH-FIRST SEARCH AND LINEAR GRAPH ALGORITHMS

The value of depth-first search or "backtracking" as a technique for solving problems is illustrated by two examples. An improved version of an algorithm for finding the strongly connected components of a directed graph and an algorithm for finding the biconnected components of an undirected graph are presented. The space and time requirements of both algorithms are bounded by k1*V + k2*E + k3 for some constants k1, k2, and k3, where V is the number of vertices and E is the number of edges of the graph being examined.
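The strongly connected components algorithm referred to above is the classic single-pass DFS with indices and low-links: a vertex whose low-link equals its own index roots a component, which is then popped off an auxiliary stack. A compact sketch (the function name and adjacency-dict input are illustrative; the recursive form limits it to graphs shallower than Python's recursion limit):

```python
def tarjan_scc(adj):
    """Tarjan's algorithm on a directed graph {vertex: [successors]}.
    One DFS assigns each vertex a discovery index and a low-link (the
    smallest index reachable through its subtree plus one back edge);
    low[v] == index[v] marks v as the root of an SCC. O(V + E)."""
    index, low, on_stack, stack, sccs = {}, {}, set(), [], []
    counter = [0]

    def dfs(v):
        index[v] = low[v] = counter[0]
        counter[0] += 1
        stack.append(v)
        on_stack.add(v)
        for w in adj[v]:
            if w not in index:
                dfs(w)
                low[v] = min(low[v], low[w])
            elif w in on_stack:          # back edge into the current path
                low[v] = min(low[v], index[w])
        if low[v] == index[v]:           # v roots a component: pop it
            comp = []
            while True:
                w = stack.pop()
                on_stack.discard(w)
                comp.append(w)
                if w == v:
                    break
            sccs.append(comp)

    for v in adj:
        if v not in index:
            dfs(v)
    return sccs
```

On the graph `{0: [1], 1: [2], 2: [0], 3: [1]}` the cycle 0→1→2→0 comes out as one component and the dangling vertex 3 as another, matching the V + E bound in the abstract since every vertex and edge is handled once.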

Research paper thumbnail of Adobe ActionScript Compiler 2.0 release notes
