Algorithms for the non-monic case of the sparse modular GCD algorithm

A sparse modular GCD algorithm for polynomials over algebraic function fields

Proceedings of the 2007 international symposium on Symbolic and algebraic computation - ISSAC '07, 2007

We present a first sparse modular algorithm for computing a greatest common divisor of two polynomials f1, f2 ∈ L[x] where L is an algebraic function field in k ≥ 0 parameters with r ≥ 0 field extensions. Our algorithm extends the dense algorithm of Monagan and van Hoeij from 2004 to support multiple field extensions and to be efficient when the gcd is sparse. Our algorithm is an output sensitive Las Vegas algorithm. We have implemented our algorithm in Maple. We provide timings demonstrating the efficiency of our algorithm compared to that of Monagan and van Hoeij and with a primitive fraction-free Euclidean algorithm for both dense and sparse gcd problems.
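The core idea of any modular GCD algorithm can be illustrated in a much-simplified setting — this is only a sketch of the general technique, not the authors' algorithm, which works over algebraic function fields: map the problem into F_p[x], where the Euclidean algorithm runs cheaply and coefficient growth cannot occur. All function names below are illustrative.

```python
def trim(a):
    """Drop zero leading coefficients (lists store lowest degree first)."""
    while a and a[-1] == 0:
        a.pop()
    return a

def poly_rem_mod(a, b, p):
    """Remainder of a divided by b in F_p[x]; b must be nonzero, p prime."""
    a = [c % p for c in a]
    b = [c % p for c in b]
    inv = pow(b[-1], p - 2, p)          # inverse of b's leading coefficient
    while len(a) >= len(b):
        q = a[-1] * inv % p
        s = len(a) - len(b)
        for i, c in enumerate(b):
            a[i + s] = (a[i + s] - q * c) % p
        trim(a)                          # top coefficient is now zero
    return a

def poly_mod_gcd(f, g, p):
    """Monic gcd of f and g in F_p[x] by the Euclidean algorithm."""
    f = trim([c % p for c in f])
    g = trim([c % p for c in g])
    while g:
        f, g = g, poly_rem_mod(f, g, p)
    inv = pow(f[-1], p - 2, p)
    return [c * inv % p for c in f]     # normalize to monic
```

For example, with f = (x-1)(x+2)(x+3) and g = (x-1)(x+4), the gcd modulo 7 is the image of x - 1. A full modular algorithm would repeat this for several primes (or evaluation points) and reconstruct the true gcd, discarding unlucky primes.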

Three new algorithms for multivariate polynomial GCD

Journal of Symbolic Computation, 1992

Three new algorithms for multivariate polynomial GCD (greatest common divisor) are given. The first is to calculate a Gröbner basis with a certain term ordering. The second is to calculate the subresultant by treating the coefficients w.r.t. the main variable as truncated power series. The third is to calculate a PRS (polynomial remainder sequence) by treating the coefficients as truncated power series. The first algorithm is not important practically, but the second and third ones are efficient and seem to be useful practically. The third algorithm has been implemented naively and compared with the trial-division PRS algorithm and the EZGCD algorithm. Although it is too early to draw a definite conclusion, the PRS method with power series coefficients is very efficient for calculating a low-degree GCD of high-degree non-sparse polynomials.
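The baseline notion the paper builds on — a polynomial remainder sequence — can be sketched in the simplest univariate, rational-coefficient setting; this is only the classical Euclidean PRS, not the paper's truncated-power-series variant, and the names are illustrative:

```python
from fractions import Fraction

def poly_rem(a, b):
    """Remainder of a divided by b over Q; coefficient lists, lowest degree first."""
    a = [Fraction(c) for c in a]
    b = [Fraction(c) for c in b]
    while len(a) >= len(b):
        q = a[-1] / b[-1]
        s = len(a) - len(b)
        for i, c in enumerate(b):
            a[i + s] -= q * c
        while a and a[-1] == 0:          # drop the cancelled leading terms
            a.pop()
    return a

def prs(f, g):
    """Euclidean polynomial remainder sequence f, g, r1, r2, ...;
    the last nonzero element is (a scalar multiple of) gcd(f, g)."""
    seq = [f, g]
    while seq[-1]:
        seq.append(poly_rem(seq[-2], seq[-1]))
    return seq[:-1]                      # discard the final zero remainder
```

For f = x^3 - 1 and g = x^2 - 1 the sequence is f, g, x - 1, and x - 1 is indeed the gcd. Over several variables the coefficients of these remainders blow up, which is exactly the growth the paper's power-series truncation is designed to suppress.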

GPGCD: An iterative method for calculating approximate GCD of univariate polynomials

Theoretical Computer Science, 2013

We present an iterative algorithm for calculating an approximate greatest common divisor (GCD) of univariate polynomials with real or complex coefficients. For a given pair of polynomials and a degree, our algorithm finds a pair of polynomials which has a GCD of the given degree and whose coefficients are perturbed from those of the original inputs, making the perturbations as small as possible, along with the GCD. The approximate GCD problem is transferred to a constrained minimization problem, then solved with the so-called modified Newton method, a generalization of the gradient-projection method, by searching for the solution iteratively. We demonstrate that, in some test cases, our algorithm calculates an approximate GCD with perturbations as small as those calculated by a method based on the structured total least norm (STLN) method and by the UVGCD method, while running significantly faster than their implementations, by factors of up to approximately 30 and 10, respectively. We also show that our algorithm properly handles some ill-conditioned polynomials which have a GCD with a small or large leading coefficient.

GPGCD, an iterative method for calculating approximate GCD, for multiple univariate polynomials

ACM SIGSAM Bulletin, 2011


On the genericity of the modular polynomial GCD algorithm

Proceedings of the 1999 international symposium on Symbolic and algebraic computation - ISSAC '99, 1999

In this paper we study the generic setting of the modular GCD algorithm. We develop the algorithm for multivariate polynomials over Euclidean domains which have a special kind of remainder function. Details for the parameterization and generic Maple code are given. Applying this generic algorithm to a GCD problem in Z/(p)[t][x] where p is small yields improved asymptotic performance over the usual approach, and a very practical algorithm for polynomials over small finite fields. This material is based on work supported in part by the National Science Foundation under Grant No. CCR-9712267 (Erich Kaltofen) and on work supported by NSERC of Canada (Michael Monagan).

Using an efficient sparse minor expansion algorithm to compute polynomial subresultants and GCD

In this paper, the use of an efficient sparse minor expansion method to directly compute the subresultants needed for the greatest common divisor (GCD) of two polynomials is described. The sparse minor expansion method (applied either to Sylvester's or Bezout's matrix) naturally computes the coefficients of the subresultants in the order corresponding to a polynomial remainder sequence (PRS), avoiding wasteful recomputation as much as possible. It is suggested that this is an efficient method to compute the resultant and GCD of sparse polynomials.
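The Sylvester-matrix connection underlying this entry is concrete: the resultant of two polynomials is the determinant of their Sylvester matrix, and the subresultant coefficients are minors of it. A naive sketch of that relationship follows — using dense exact Gaussian elimination rather than the paper's sparse minor expansion, with illustrative names:

```python
from fractions import Fraction

def sylvester(f, g):
    """Sylvester matrix of f, g given as coefficient lists, highest degree first."""
    m, n = len(f) - 1, len(g) - 1
    N = m + n
    rows = []
    for i in range(n):                   # n shifted copies of f
        rows.append([0] * i + list(f) + [0] * (N - m - i - 1))
    for i in range(m):                   # m shifted copies of g
        rows.append([0] * i + list(g) + [0] * (N - n - i - 1))
    return rows

def det(M):
    """Exact determinant over Q by Gaussian elimination with Fractions."""
    M = [[Fraction(x) for x in row] for row in M]
    n = len(M)
    sign, d = 1, Fraction(1)
    for c in range(n):
        piv = next((r for r in range(c, n) if M[r][c]), None)
        if piv is None:
            return Fraction(0)           # singular: the resultant vanishes
        if piv != c:
            M[c], M[piv] = M[piv], M[c]
            sign = -sign
        d *= M[c][c]
        for r in range(c + 1, n):
            factor = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= factor * M[c][k]
    return sign * d
```

For f = x^2 - 1 and g = x - 1 (a common root at x = 1) the determinant is 0, signalling a nontrivial GCD; for the coprime pair f = x^2 + 1, g = x - 1 it is the nonzero resultant 2.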

Four New Algorithms for Multivariate Polynomial GCD

2016

Four new algorithms for multivariate polynomial GCD (greatest common divisor) are given. The first is a simple improvement of PRS (polynomial remainder sequence) algorithms. The second is to calculate a Groebner basis with a certain term ordering. The third is to calculate the subresultant by treating the coefficients as truncated power series. The fourth is to calculate a PRS by treating the coefficients as truncated power series. The first and second algorithms are not important practically, but the third and fourth ones are quite efficient and seem to be useful practically.

On the multi-threaded computation of integral polynomial greatest common divisors

Proceedings of the 1991 international symposium on Symbolic and algebraic computation - ISSAC '91, 1991

We present two algorithms for interpolating sparse rational functions. The first is an interpolation algorithm in the sense of the sparse partial fraction representation of rational functions. The second is an algorithm for computing the entier and the remainder of a rational function. The first algorithm works without an a priori known bound on the degree of the rational function; the second one is in the parallel class NC provided that the degree is known. The presented algorithms complement the sparse interpolation results of [GKS 90].

An alternating projection algorithm for the “approximate” GCD calculation

IFAC-PapersOnLine

In this paper, an approach is proposed for calculating the "best" approximate GCD of a set of coprime polynomials. The algorithm is motivated by the factorisation of the Sylvester resultant matrix of polynomial sets with nontrivial GCD. In the (generic) case of coprime polynomial sets considered here, the aim is to minimise the norm of the residual error matrix of the inexact factorisation in order to compute the "optimal" approximate GCD. A least-squares alternating projection algorithm is proposed as an alternative to solving the corresponding optimisation problem via nonlinear programming techniques. The special structure of the problem in this case, however, means that the algorithm can be reduced to a sequence of standard subspace projections, and hence there is no need to compute gradient vectors, Hessian matrices or optimal step-lengths. An estimate of the asymptotic convergence rate of the algorithm is finally established via the inclination of two subspaces.