Martin Fürer - Academia.edu
Papers by Martin Fürer
Springer eBooks, 2015
An efficient algorithm is presented to compute the characteristic polynomial of a threshold graph. Threshold graphs were introduced by Chvátal and Hammer, as well as by Henderson and Zalcstein, in 1977. A threshold graph is obtained from a one-vertex graph by repeatedly adding either an isolated vertex or a dominating vertex, which is a vertex adjacent to all the other vertices. Threshold graphs are special kinds of cographs, which themselves are special kinds of graphs of clique-width 2. We obtain a running time of O(n log^2 n) for computing the characteristic polynomial, while the previously fastest algorithm ran in quadratic time.
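The construction in the abstract (start from one vertex, then repeatedly add an isolated or a dominating vertex) is easy to make concrete. The sketch below builds a threshold graph from a creation sequence and computes the characteristic polynomial by the naive Newton-identities method, a slow exact baseline for checking tiny cases, not the paper's O(n log^2 n) algorithm; the function names are illustrative.

```python
from fractions import Fraction

def threshold_graph(creation):
    """Adjacency matrix of a threshold graph from a creation sequence:
    'i' adds an isolated vertex, 'd' adds a dominating vertex
    (adjacent to all vertices added so far)."""
    n = len(creation)
    A = [[0] * n for _ in range(n)]
    for v, kind in enumerate(creation):
        if kind == 'd':
            for u in range(v):
                A[u][v] = A[v][u] = 1
    return A

def char_poly(A):
    """Coefficients [1, c1, ..., cn] of det(xI - A), highest degree first,
    via Newton's identities on the power sums trace(A^k).
    Naive exact baseline for small matrices only."""
    n = len(A)
    M = [[Fraction(x) for x in row] for row in A]
    P = [row[:] for row in M]              # P holds A^k
    power_sums = []
    for _ in range(n):
        power_sums.append(sum(P[i][i] for i in range(n)))
        P = [[sum(P[i][k] * M[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
    coeffs = [Fraction(1)]
    for m in range(1, n + 1):
        c = -sum(power_sums[i - 1] * coeffs[m - i]
                 for i in range(1, m + 1)) / m
        coeffs.append(c)
    return coeffs
```

For example, the sequence 'idd' produces the triangle K3, whose characteristic polynomial is x^3 - 3x - 2.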
arXiv (Cornell University), Jun 21, 2016
Tree-width and path-width are widely successful concepts. Many NP-hard problems have efficient solutions when restricted to graphs of bounded tree-width. Many efficient algorithms are based on a tree decomposition. Sometimes the more restricted path decomposition is required. The bottleneck for such algorithms is often the computation of the width and a corresponding tree or path decomposition. For graphs with n vertices and tree-width or path-width k, the standard linear time algorithm to compute these decompositions dates back to 1996. Its running time is linear in n and exponential in k^3, and not usable in practice. Here we present a more efficient algorithm to compute the path-width and provide a path decomposition. Its running time is 2^{O(k^2)} n. In the classical algorithm of Bodlaender and Kloks, the path decomposition is computed from a tree decomposition. Here, an optimal path decomposition is computed from a path decomposition of about twice the width. The latter is computed from a constant factor smaller graph.
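A path decomposition, as used above, is a sequence of bags satisfying three conditions. The following sketch (stdlib only, names illustrative) checks them and reports the width of a given decomposition:

```python
def is_path_decomposition(bags, n, edges):
    """Check the three path-decomposition conditions for a graph on
    vertices 0..n-1: (1) every vertex appears in some bag, (2) every
    edge is contained in some bag, (3) the bags containing any fixed
    vertex form a contiguous interval of the path."""
    bags = [set(b) for b in bags]
    if set().union(*bags) != set(range(n)):
        return False
    if any(not any({u, v} <= b for b in bags) for u, v in edges):
        return False
    for v in range(n):
        idx = [i for i, b in enumerate(bags) if v in b]
        if idx != list(range(idx[0], idx[-1] + 1)):
            return False
    return True

def width(bags):
    """Width of a decomposition: largest bag size minus one."""
    return max(len(b) for b in bags) - 1
```

On the path 0-1-2-3, the bags {0,1}, {1,2}, {2,3} form a valid path decomposition of width 1; permuting the bags breaks the contiguity condition.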
Algorithmica, Sep 13, 2012
An O(n log^2 n) algorithm is presented to compute all coefficients of the characteristic polynomial of a tree on n vertices, improving on the previously best quadratic time. With the same running time, the algorithm can be generalized in two directions. The algorithm is a counting algorithm for matchings, and the same ideas can be used to count other objects. For example, one can count the number of independent sets of all possible sizes simultaneously with the same running time. These counting algorithms not only work for trees, but can be extended to arbitrary graphs of bounded tree-width.
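The connection between the two views is that for a forest the characteristic polynomial equals the matching polynomial, phi(T, x) = sum_k (-1)^k m_k x^{n-2k}, where m_k counts k-matchings; so counting matchings of every size yields all coefficients. A brute-force illustration on tiny trees (exponential time, not the paper's method):

```python
from itertools import combinations

def matching_counts(n, edges):
    """m[k] = number of k-matchings (sets of k pairwise disjoint edges);
    brute force over edge subsets, usable only on tiny graphs."""
    m = [0] * (n // 2 + 1)
    for k in range(len(m)):
        for sub in combinations(edges, k):
            verts = [v for e in sub for v in e]
            if len(verts) == len(set(verts)):  # edges pairwise disjoint
                m[k] += 1
    return m

def tree_char_poly(n, edges):
    """Characteristic-polynomial coefficients of a forest, highest degree
    first, via phi(x) = sum_k (-1)^k m_k x^(n-2k)."""
    coeffs = [0] * (n + 1)
    for k, mk in enumerate(matching_counts(n, edges)):
        coeffs[2 * k] = (-1) ** k * mk
    return coeffs
```

The path on 4 vertices has 1 empty matching, 3 single edges, and 1 matching of size two, giving x^4 - 3x^2 + 1.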
arXiv (Cornell University), Oct 6, 2020
We present a new approximation algorithm for the treewidth problem which finds an upper bound on the treewidth and constructs a corresponding tree decomposition as well. Our algorithm is a faster variation of Reed's classical algorithm. For the benefit of the reader, and to be able to compare these two algorithms, we start with a detailed time analysis of Reed's algorithm. We fill in many details that have been omitted in Reed's paper. Computing tree decompositions parameterized by the treewidth k is fixed parameter tractable (FPT), meaning that there are algorithms running in time O(f(k) g(n)), where f is a computable function and g(n) is polynomial in n, where n is the number of vertices. An analysis of Reed's algorithm shows f(k) = 2^{O(k log k)} and g(n) = n log n for a 5-approximation. Reed simply claims time O(n log n) for bounded k for his constant factor approximation algorithm, but the bound of 2^{Ω(k log k)} n log n is well known. From a practical point of view, we notice that the time of Reed's algorithm also contains a term of O(k^2 2^{24k} n log n), which for small k is much worse than the asymptotically leading term of 2^{O(k log k)} n log n. We analyze f(k) more precisely, because the purpose of this paper is to improve the running times for all reasonably small values of k. Our algorithm also runs in time O(f(k) n log n), but with a much smaller dependence on k. In our case, f(k) = 2^{O(k)}. This algorithm is simple and fast, especially for small values of k. We should mention that Bodlaender et al. [2016] have an algorithm with a linear dependence on n, and Korhonen [2021] obtains the much better approximation ratio of 2, while the current paper achieves a better dependence on k.
Symposium on the Theory of Computing, 1987
Improving a result of Mehlhorn and Schmidt, a function f with deterministic communication complexity n^2 is shown to have Las Vegas communication complexity O(n). This is the best possible, because the deterministic complexity cannot be more than the square of the Las Vegas communication complexity for any function.
arXiv (Cornell University), Jun 10, 2009
The bandwidth of a graph G on n vertices is the minimum b such that the vertices of G can be labeled from 1 to n such that the labels of every pair of adjacent vertices differ by at most b. In this paper, we present a 2-approximation algorithm for the Bandwidth problem that takes worst-case O(1.9797^n) = O(3^{0.6217 n}) time and uses polynomial space. This improves both the previous best 2- and 3-approximation algorithms of Cygan et al., which have O*(3^n) and O*(2^n) worst-case running time bounds, respectively. Our algorithm is based on constructing bucket decompositions of the input graph. A bucket decomposition partitions the vertex set of a graph into ordered sets (called buckets) of (almost) equal sizes such that all edges are either incident to vertices in the same bucket or to vertices in two consecutive buckets. The idea is to find the smallest bucket size for which there exists a bucket decomposition. The algorithm uses a divide-and-conquer strategy along with dynamic programming to achieve the improved time bound.
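Both definitions in the abstract can be checked directly on small graphs: an exhaustive O(n!) computation of the bandwidth, and a test that a bucket assignment only has intra-bucket or consecutive-bucket edges. Purely illustrative, not the paper's algorithm:

```python
from itertools import permutations

def bandwidth(n, edges):
    """Exact bandwidth by trying all labelings -- O(n!) brute force,
    only to illustrate the definition on tiny graphs."""
    best = n
    for label in permutations(range(n)):
        best = min(best, max(abs(label[u] - label[v]) for u, v in edges))
    return best

def is_bucket_decomposition(assignment, edges):
    """Every edge must stay within one bucket or join two consecutive
    buckets; 'assignment' maps each vertex to its bucket index."""
    return all(abs(assignment[u] - assignment[v]) <= 1 for u, v in edges)
```

A path has bandwidth 1, while any cycle on at least four vertices has bandwidth 2; splitting the 4-cycle into two buckets of two vertices each gives a valid bucket decomposition.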
Springer eBooks, Aug 27, 2008
Let H be a graph, and let C_H(G) be the number of (subgraph isomorphic) copies of H contained in a graph G. We investigate the fundamental problem of estimating C_H(G). Previous results cover only a few specific instances of this general problem, for example, the case when H has degree at most one (the monomer-dimer problem). In this paper, we present the first general subcase of the subgraph isomorphism counting problem which is almost always efficiently approximable. The results rely on a new graph decomposition technique. Informally, the decomposition is a labeling of the vertices such that every edge is between vertices with different labels, and for every vertex all neighbors with a higher label have identical labels. The labeling implicitly generates a sequence of bipartite graphs which permits us to break the problem of counting embeddings of large subgraphs into that of counting embeddings of small subgraphs. Using this method, we present a simple randomized algorithm for the counting problem. For all decomposable graphs H and all graphs G, the algorithm is an unbiased estimator. Furthermore, for all graphs H having a decomposition where each of the bipartite graphs generated is small and almost all graphs G, the algorithm is a fully polynomial randomized approximation scheme. We show that the graph classes of H for which we obtain a fully polynomial randomized approximation scheme for almost all G include graphs of degree at most two, bounded-degree forests, bounded-length grid graphs, subdivisions of bounded-degree graphs, and major subclasses of outerplanar graphs, series-parallel graphs and planar graphs, whereas unbounded-length grid graphs are excluded. Additionally, our general technique can easily be applied to proving many more similar results.
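The decomposition described informally above can be verified mechanically: every edge must join differently labeled endpoints, and each vertex's higher-labeled neighbors must all carry one common label. A hypothetical checker (the function name is mine, not from the paper):

```python
def is_valid_decomposition_labeling(label, edges):
    """Check the paper's informal decomposition conditions: every edge
    joins differently labeled endpoints, and for each vertex all of its
    higher-labeled neighbors share one common label."""
    if any(label[u] == label[v] for u, v in edges):
        return False
    higher = {}  # vertex -> set of labels seen on higher-labeled neighbors
    for u, v in edges:
        for a, b in ((u, v), (v, u)):
            if label[b] > label[a]:
                higher.setdefault(a, set()).add(label[b])
    return all(len(s) == 1 for s in higher.values())
```

A star with center labeled 0 and leaves labeled 1 qualifies; a vertex with two higher-labeled neighbors of different labels does not.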
* A preliminary version of this paper appeared in 12th International Workshop on Randomization and Computation (RANDOM 2008).
arXiv (Cornell University), Apr 4, 2017
The classical Weisfeiler-Lehman method WL[2] uses edge colors to produce a powerful graph invariant. It is at least as powerful in its ability to distinguish non-isomorphic graphs as the most prominent algebraic graph invariants. It determines not only the spectrum of a graph, and the angles between standard basis vectors and the eigenspaces, but even the angles between projections of standard basis vectors into the eigenspaces. Here, we investigate the combinatorial power of WL[2]. For sufficiently large k, WL[k] determines all combinatorial properties of a graph. Many traditionally used combinatorial invariants are determined by WL[k] for small k. We focus on two fundamental invariants, the number of cycles C_p of length p, and the number of cliques K_p of size p. We show that WL[2] determines the number of cycles of lengths up to 6, but not those of length 8. Also, WL[2] does not determine the number of 4-cliques.
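WL[2] refines colors on ordered vertex pairs: each pair (u, v) collects the multiset of color pairs (c(u, w), c(w, v)) over all vertices w. A compact, unoptimized sketch that is already strong enough to separate C6 from two disjoint triangles (two 2-regular graphs that plain degree-based refinement cannot tell apart):

```python
from collections import Counter

def wl2_signature(n, edges):
    """Stable color histogram of 2-dimensional Weisfeiler-Lehman
    refinement on ordered vertex pairs. A plain sketch for tiny graphs,
    not an optimized implementation."""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True

    def canon(coloring):
        # relabel colors 0, 1, 2, ... in sorted order of their descriptors,
        # so signatures are comparable across different graphs
        palette = {c: i for i, c in enumerate(sorted(set(coloring.values())))}
        return {p: palette[c] for p, c in coloring.items()}

    color = canon({(u, v): (u == v, adj[u][v])
                   for u in range(n) for v in range(n)})
    while True:
        new = canon({(u, v): (color[(u, v)],
                              tuple(sorted((color[(u, w)], color[(w, v)])
                                           for w in range(n))))
                     for u in range(n) for v in range(n)})
        if len(set(new.values())) == len(set(color.values())):
            break  # refinement is stable: no color class split
        color = new
    return tuple(sorted(Counter(color.values()).items()))
```

Two isomorphic graphs always get the same signature; different signatures certify non-isomorphism.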
arXiv (Cornell University), Jan 21, 2019
An NP-hard graph problem may be intractable for general graphs but efficiently solvable, using dynamic programming, for graphs with bounded width (or depth, or some other structural parameter). Dynamic programming is a well-known approach for finding exact solutions to NP-hard graph problems based on tree decompositions. It has been shown that, for many connectivity problems, there exist algorithms using time linear in the number of vertices and single exponential in the width (depth, or other parameter) of a given tree decomposition. Employing dynamic programming on a tree decomposition usually uses exponential space. In 2010, Lokshtanov and Nederlof introduced an elegant framework to avoid exponential space by algebraization. Later, Fürer and Yu modified the framework so that it even works when the underlying set is dynamic, thus applying it to tree decompositions. In this work, we design space-efficient algorithms to solve the Hamiltonian Cycle and Traveling Salesman problems, using polynomial space while the time complexity is only slightly increased. This might be inevitable, since we are reducing the space usage from an exponential amount (in the dynamic programming solution) to polynomial. We give an algorithm to solve Hamiltonian Cycle in time O((4w)^d n M(n log n)) using O(d n log n) space, where M(r) is the time complexity to multiply two integers, each represented by at most r bits. Then, we solve the more general Traveling Salesman problem in time O((4w)^d poly(n)) using space O(W d n log n), where w and d are the width and depth of the given tree decomposition and W is the sum of the weights. Furthermore, this algorithm counts the number of Hamiltonian cycles.
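For contrast with the polynomial-space result, the classical exponential-space baseline is the Held-Karp bitmask dynamic program, shown here for Hamiltonian cycle detection; its 2^n-sized table is exactly the kind of storage the algebraization technique avoids (an illustrative sketch on general graphs, not the paper's tree-decomposition algorithm):

```python
def has_hamiltonian_cycle(n, edges):
    """Held-Karp dynamic program: O(2^n n^2) time and, notably,
    O(2^n n) space -- the exponential space usage that polynomial-space
    techniques are designed to avoid."""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = adj[v][u] = True
    # dp[mask][v]: is there a path starting at vertex 0, visiting exactly
    # the vertices in 'mask', and ending at v?
    dp = [[False] * n for _ in range(1 << n)]
    dp[1][0] = True
    for mask in range(1 << n):
        for v in range(n):
            if not dp[mask][v]:
                continue
            for w in range(n):
                if adj[v][w] and not mask & (1 << w):
                    dp[mask | (1 << w)][w] = True
    full = (1 << n) - 1
    return any(dp[full][v] and adj[v][0] for v in range(1, n))
```

The 4-cycle has a Hamiltonian cycle; the 4-vertex path does not.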
Theoretical Computer Science, 2017
An efficient algorithm is presented to compute the characteristic polynomial of a threshold graph. Threshold graphs were introduced by Chvátal and Hammer, as well as by Henderson and Zalcstein, in 1977. A threshold graph is obtained from a one-vertex graph by repeatedly adding either an isolated vertex or a dominating vertex, which is a vertex adjacent to all the other vertices. Threshold graphs are special kinds of cographs, which themselves are special kinds of graphs of clique-width 2. We obtain a running time of O(n log^2 n) for computing the characteristic polynomial, while the previously fastest algorithm ran in quadratic time.
For more than 35 years, the fastest known method for integer multiplication has been the Schönhage-Strassen algorithm running in time O(n log n log log n). Under certain restrictive conditions, there is a corresponding Ω(n log n) lower bound. The prevailing conjecture has always been that the complexity of an optimal algorithm is Θ(n log n). We present a major step towards closing the gap from above by presenting an algorithm running in time n log n · 2^{O(log* n)}. The main result holds for Boolean circuits as well as for multitape Turing machines, but it has consequences for other models of computation as well.
arXiv (Cornell University), Oct 25, 2017
Finding a diagonal matrix congruent to A − cI for constants c, where A is the adjacency matrix of a graph G, allows us to quickly tell the number of eigenvalues in a given interval. If G has clique-width k and a corresponding k-expression is known, then diagonalization can be done in time O(poly(k) n), where n is the order of G.
arXiv (Cornell University), Nov 4, 2021
We define a notion called a leftmost separator of size at most k. A leftmost separator of size k is a minimal separator S that separates two given sets of vertices X and Y such that S cannot be moved further towards X while |S| remains below the threshold. One incentive is that leftmost separators can be used to improve the time complexity of treewidth approximation. Treewidth approximation is a problem which is known to have an FPT algorithm that is linear in the input size and only single exponential in the parameter, the treewidth. It is not known whether this result can be improved theoretically. However, the coefficient of the parameter k (the treewidth) in the exponent is large. Hence, our goal is to decrease the coefficient of k in the exponent, in order to achieve a more practical algorithm. Hereby, we trade a linear-time algorithm for an O(n log n)-time algorithm. The previously known O(f(k) n log n)-time algorithms have dependences of 2^{24k} k!, 2^{8.766k} k^2 (a better analysis shows that it is 2^{7.671k} k^2), and higher. In this paper, we present an algorithm for treewidth approximation which runs in time O(2^{6.755k} n log n). Furthermore, we count the number of leftmost separators and give a tight upper bound for them. We show that the number of leftmost separators of size ≤ k is at most C_{k−1}, the (k−1)-st Catalan number. Then, we present an algorithm which outputs all leftmost separators in time O(4^k √k n).
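The C_{k−1} bound above is easy to tabulate: the m-th Catalan number is binom(2m, m)/(m + 1), which grows roughly like 4^m / m^{3/2}, matching the 4^k factor in the enumeration time.

```python
from math import comb

def catalan(m):
    """The m-th Catalan number C_m = binom(2m, m) / (m + 1) (exact)."""
    return comb(2 * m, m) // (m + 1)

# Upper bound on the number of leftmost separators of size <= k is C_(k-1):
bounds = {k: catalan(k - 1) for k in range(1, 8)}
```

For instance, separators of size at most 5 number at most C_4 = 14.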
We define a powerful new approximation technique called semi-local optimization. It provides very natural heuristics that are distinctly more powerful than those based on local optimization. With an appropriate metric, semi-local optimization can still be viewed as a local optimization, but it has the advantage of making global changes to an approximate solution. Semi-local optimization generalizes recent heuristics of Halldórsson for 3-Set Cover, Color Saving, and k-Set Cover. Greatly improved performance ratios of 4/3 for 3-Set Cover and 6/5 for Color Saving in graphs without independent sets of size 4 are obtained and shown to be the best possible with semi-local optimization. Also, based on the result for 3-Set Cover and a restricted greedy phase for big sets, we can improve the performance ratio for k-Set Cover to H_k − 1/2, where H_k is the k-th harmonic number. In Color Saving, when larger independent sets exist, we can improve the performance ratio further.
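As background for the ratios above: the plain greedy algorithm for k-Set Cover achieves ratio H_k, the k-th harmonic number, and it is such greedy/local baselines that semi-local optimization improves on. A minimal greedy sketch (not the paper's technique, names illustrative):

```python
def greedy_set_cover(universe, sets):
    """Classical greedy set cover: repeatedly pick the set covering the
    most still-uncovered elements. For k-Set Cover (all sets of size at
    most k) this achieves approximation ratio H_k. Assumes a feasible
    instance (the sets jointly cover the universe)."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        j = max(range(len(sets)), key=lambda i: len(sets[i] & uncovered))
        if not sets[j] & uncovered:
            raise ValueError("instance is infeasible")
        uncovered -= sets[j]
        chosen.append(j)
    return chosen
```

On a universe of six elements coverable by two disjoint triples, greedy picks exactly those two sets.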
arXiv (Cornell University), Sep 6, 2021
Let M = (m_ij) be a symmetric matrix of order n whose elements lie in an arbitrary field F, and let G be the graph with vertex set {1, ..., n} such that distinct vertices i and j are adjacent if and only if m_ij ≠ 0. We introduce a dynamic programming algorithm that finds a diagonal matrix that is congruent to M. If G is given with a tree decomposition T of width k, then this can be done in time O(k|T| + k^2 n), where |T| denotes the number of nodes in T. Among other things, this allows the computation of the determinant, the rank and the inertia of a symmetric matrix in time O(k|T| + k^2 n).
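The congruence at the heart of the algorithm can be illustrated by plain symmetric Gaussian elimination: applying the same elementary operation to rows and columns yields a diagonal D = P M P^T, and by Sylvester's law of inertia the signs of D's diagonal give the inertia of M. A naive O(n^3) sketch assuming nonzero leading pivots (the paper's algorithm exploits the tree decomposition and handles pivoting):

```python
from fractions import Fraction

def congruent_diagonal(M):
    """Return the diagonal of D = P M P^T for a symmetric M, by symmetric
    Gaussian elimination. Minimal sketch: raises on a zero pivot instead
    of pivoting."""
    n = len(M)
    A = [[Fraction(x) for x in row] for row in M]
    for i in range(n):
        if A[i][i] == 0:
            raise ValueError("zero pivot; pivoting not implemented in this sketch")
        for j in range(i + 1, n):
            f = A[j][i] / A[i][i]
            # same elementary operation on row j and column j preserves
            # symmetry and congruence (and hence the inertia)
            for k in range(n):
                A[j][k] -= f * A[i][k]
            for k in range(n):
                A[k][j] -= f * A[k][i]
    return [A[i][i] for i in range(n)]

def inertia(M):
    """(#positive, #negative, #zero) diagonal entries after congruence --
    equal to the counts of eigenvalue signs by Sylvester's law."""
    d = congruent_diagonal(M)
    return (sum(x > 0 for x in d), sum(x < 0 for x in d),
            sum(x == 0 for x in d))
```

Since the P built here is unit triangular, the product of the returned diagonal also equals det(M).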