Nicolas Brisebarre - Academia.edu

Papers by Nicolas Brisebarre

(M, p, k)-Friendly Points: A Table-based Method to Evaluate Trigonometric Function

We present a new way of approximating the sine and cosine functions by a few table look-ups and additions. The input range is first reduced to a very small interval by rotations through the "(M, p, k)-friendly angles" proposed in this work; a bipartite table method is then applied on that small interval. An implementation of the method for the 24-bit case is described and compared with CORDIC. Roughly, the proposed scheme offers a speedup of 2 over an unfolded double-rotation radix-2 CORDIC.
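
To make the table-based idea concrete, here is a minimal sketch of a generic first-order bipartite table for sin on [0, pi/4) — my own illustration of the general technique, not the paper's exact construction (which uses the (M, p, k)-friendly rotations for range reduction). The argument is split into three K-bit chunks x = x0 + x1 + x2; one table stores sin(x0 + x1), the other the linear correction cos(x0) * x2, so each evaluation costs two look-ups and one addition.

```python
import math

K = 5                        # bits per chunk (assumption for this demo)
N = 1 << K
STEP = (math.pi / 4) / N**3  # weight of one ulp of the finest chunk

# Table A: indexed by the two leading chunks (x0, x1).
table_a = {(i, j): math.sin((i * N**2 + j * N) * STEP)
           for i in range(N) for j in range(N)}
# Table B: indexed by the leading and trailing chunks (x0, x2).
table_b = {(i, k): math.cos(i * N**2 * STEP) * (k * STEP)
           for i in range(N) for k in range(N)}

def bipartite_sin(x):
    """Approximate sin(x) for x in [0, pi/4) with two table look-ups."""
    t = int(x / STEP)                  # fixed-point argument, 3K bits
    x0, x1, x2 = t >> 2 * K, (t >> K) & (N - 1), t & (N - 1)
    return table_a[(x0, x1)] + table_b[(x0, x2)]

# The error is dominated by the second-order term the split neglects.
worst = max(abs(bipartite_sin(x) - math.sin(x))
            for x in [i * math.pi / 4 / 1000 for i in range(1000)])
print(worst)   # stays well below 1e-3
```

The point of the bipartite split is that two tables of size N² replace one table of size N³, at the price of a small, bounded first-order approximation error.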

Chebyshev interpolation polynomial-based tools for rigorous computing

Proceedings of the 2010 International Symposium on Symbolic and Algebraic Computation - ISSAC '10, 2010

Languages and Compilers

Handbook of Floating-Point Arithmetic, 2009

The previous chapters have given an overview of interesting properties and algorithms that can be built on IEEE 754-compliant floating-point arithmetic. In this chapter, we discuss the practical issues encountered when trying to implement such algorithms in actual computers using actual programming languages. In particular, we discuss the relationship between standard compliance, portability, accuracy, and performance. This chapter is useful…

Definitions and Basic Notions

Handbook of Floating-Point Arithmetic, 2009

As said in the Introduction, roughly speaking, a radix-β floating-point number x is a number of the form m · β^e, where β is the radix of the floating-point system, m is such that |m| is the significand of x, and e is its exponent. And yet, portability, accuracy, and the ability to prove interesting and useful properties as…
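
As a small illustration of the m · β^e form for radix β = 2 (my example, not the chapter's), Python's `math.frexp` decomposes a double into exactly such a pair, with 0.5 <= |m| < 1:

```python
import math

# frexp returns (m, e) with x == m * 2**e and 0.5 <= |m| < 1.
x = 6.75
m, e = math.frexp(x)
print(m, e)                 # 0.84375 3
print(m * 2**e == x)        # the decomposition is exact
```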

Algorithms for the Five Basic Operations

Handbook of Floating-Point Arithmetic, 2009

Among the many operations that the IEEE 754 standards specify (see Chapter 3), we will focus here and in the next two chapters on the five basic arithmetic operations: addition, subtraction, multiplication, division, and square root. We will also study the fused multiply-add (FMA) operator. We review here some of the known properties and algorithms used to implement each of…
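
The defining property of the FMA — a * b + c computed with a single final rounding — can be demonstrated without FMA hardware by comparing the twice-rounded result against an exact rational computation (my demo, not the chapter's):

```python
from fractions import Fraction

# Choose operands whose product carries a 2**-54 term that one
# intermediate rounding discards but a single final rounding keeps.
a = b = 1.0 + 2.0**-27
c = -(1.0 + 2.0**-26)

two_roundings = a * b + c                           # fl(fl(a*b) + c)
exact = Fraction(a) * Fraction(b) + Fraction(c)     # what an FMA rounds once

print(two_roundings)        # 0.0: the low-order term vanished
print(float(exact))         # 2**-54 survives a single rounding
```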

Hardware Implementation of Floating-Point Arithmetic

Handbook of Floating-Point Arithmetic, 2009

The previous chapter has shown that operations on floating-point numbers are naturally expressed in terms of integer or fixed-point operations on the significand and the exponent. For instance, to obtain the product of two floating-point numbers, one basically multiplies the significands and adds the exponents. However, obtaining the correct rounding of the result may require considerable design effort and the…
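
The "multiply the significands, add the exponents" recipe can be sketched in a few lines (my toy illustration, assuming radix 2 and no overflow or underflow), using `frexp`/`ldexp` to expose the significand–exponent view:

```python
import math

def fp_mul(x, y):
    """Multiply two doubles via their (significand, exponent) pairs."""
    mx, ex = math.frexp(x)            # x = mx * 2**ex, 0.5 <= |mx| < 1
    my, ey = math.frexp(y)
    # Multiply significands, add exponents; the product of significands
    # is rounded once, just as x * y would be.
    return math.ldexp(mx * my, ex + ey)

print(fp_mul(1.5, -2.5) == 1.5 * -2.5)   # True for normal results
```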

Evaluating Floating-Point Elementary Functions

Handbook of Floating-Point Arithmetic, 2009

The elementary functions are the most common mathematical functions: sine, cosine, tangent and their inverses, exponentials and logarithms of radices e, 2 or 10, etc. They appear everywhere in scientific computing; thus being able to evaluate them quickly and accurately is important for many applications. Various very different methods have been used for evaluating them: polynomial or rational approximations, shift-and-add…
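
To illustrate the polynomial-approximation approach mentioned above (my sketch, not from the chapter): production libraries use minimax polynomials after range reduction, but the evaluation itself is a Horner-style scheme like this degree-7 Taylor polynomial for sin on a small interval:

```python
import math

def sin_poly(x):
    """sin x ~ x*(1 - x^2/3! + x^4/5! - x^6/7!), for |x| <= pi/4."""
    x2 = x * x
    return x * (1.0 + x2 * (-1.0/6 + x2 * (1.0/120 + x2 * (-1.0/5040))))

err = max(abs(sin_poly(t) - math.sin(t))
          for t in [i * (math.pi / 4) / 500 for i in range(501)])
print(err)   # truncation error, below 1e-6 on this interval
```

A minimax polynomial of the same degree would spread the error evenly over the interval and do noticeably better than the Taylor expansion used here for simplicity.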

A floating-point library for integer processors

This paper presents a C library providing software support for single-precision floating-point (FP) arithmetic on processors without FP hardware units, such as VLIW or DSP processor cores for embedded applications. The library offers several levels of compliance with the IEEE 754 FP standard: either the complete specification can be used, or relaxed characteristics such as restricted rounding modes or computation without denormal numbers. The library is evaluated on the ST200 VLIW processors from STMicroelectronics.
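
A soft-float library of this kind implements FP operations with integer arithmetic only. The following is a hypothetical sketch of that style of code (mine, not the paper's): binary32 multiplication on raw bit patterns with round-to-nearest-even, ignoring zeros, subnormals, infinities, and NaN for brevity.

```python
import struct

def f32_mul(a_bits, b_bits):
    """Multiply two binary32 bit patterns using only integer ops."""
    sign = (a_bits ^ b_bits) & 0x80000000
    ea, eb = (a_bits >> 23) & 0xFF, (b_bits >> 23) & 0xFF
    ma = (a_bits & 0x7FFFFF) | 0x800000        # restore implicit 1
    mb = (b_bits & 0x7FFFFF) | 0x800000
    m = ma * mb                                 # up to 48-bit product
    e = ea + eb - 127
    if m & (1 << 47):                           # normalize to [1, 2)
        m, e = m >> 1, e + 1
    # Round to nearest even on the 23 discarded low bits.
    half = 1 << 22
    frac = m & ((1 << 23) - 1)
    m >>= 23
    if frac > half or (frac == half and m & 1):
        m += 1
        if m == 1 << 24:                        # rounding overflowed
            m, e = m >> 1, e + 1
    return sign | (e << 23) | (m & 0x7FFFFF)

bits = lambda x: struct.unpack("<I", struct.pack("<f", x))[0]
val = lambda b: struct.unpack("<f", struct.pack("<I", b))[0]
print(val(f32_mul(bits(1.5), bits(2.5))))   # 3.75
```

Handling the special values and subnormals this sketch omits — and doing so at several selectable levels of strictness — is precisely where much of such a library's complexity lies.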

Enhanced Floating-Point Sums, Dot Products, and Polynomial Values

In this chapter, we focus on the computation of sums and dot products, and on the evaluation of polynomials in IEEE 754 floating-point arithmetic. Such calculations arise in many fields of numerical computing. Computing sums is required, e.g., in numerical integration and the computation of means and variances. Dot products appear everywhere in numerical linear algebra. Polynomials are used to approximate many functions (see Chapter 11).
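
A basic building block for such enhanced sums is Knuth's TwoSum, shown here as my own illustration: it returns s = fl(a + b) together with the exact rounding error e, so that a + b == s + e holds exactly; compensated summation schemes accumulate these errors instead of discarding them.

```python
def two_sum(a, b):
    """Return (s, e) with s = fl(a + b) and a + b = s + e exactly."""
    s = a + b
    bb = s - a            # the part of b that actually entered s
    e = (a - (s - bb)) + (b - bb)
    return s, e

s, e = two_sum(1.0, 2.0**-60)
print(s, e)   # s = 1.0, e = 2**-60: the bit lost by fl(a + b) is recovered
```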

Supplementary material to "Accelerating Correctly Rounded Floating-Point Division When the Divisor is Known in Advance"

We call a breakpoint a value z where the rounding changes, that is, if t1 and t2 are real numbers satisfying t1 < z < t2 and ◦ is the rounding mode, then ◦(t1) < ◦(t2). For "directed" rounding modes (i.e., towards +∞, −∞, or 0), the breakpoints are the FP numbers. For ...
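
The claim that, under directed rounding, the breakpoints are the FP numbers themselves can be seen on a toy format (my illustration, with a 4-bit significand and rounding toward zero, for positive inputs only): the rounded value jumps exactly when the input crosses a representable number.

```python
from fractions import Fraction

def round_toward_zero(t, prec=4):
    """Round a positive rational t toward zero to a prec-bit significand."""
    q = Fraction(t)
    e = 0
    while q >= 2: q /= 2; e += 1        # scale q into [1, 2)
    while q < 1:  q *= 2; e -= 1
    m = int(q * 2**(prec - 1))          # truncate the significand
    return Fraction(m, 2**(prec - 1)) * Fraction(2)**e

print(round_toward_zero(Fraction(41, 40)))   # 1.025 -> 1 (not representable)
print(round_toward_zero(Fraction(9, 8)))     # 1.125 is representable -> 9/8
```

Every input strictly between two consecutive representable numbers maps to the lower one, so the rounded value only changes at the representable numbers: those are the breakpoints.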

Algorithms and Arithmetic Operators for Computing the η_T Pairing in Characteristic Three - Appendices

A Comparison between Hardware Accelerators for the Modified Tate Pairing over F_{2^m} and F_{3^m}

Augmented Precision Square Roots and 2-D Norms, and Discussion on Correctly Rounding sqrt(x^2+y^2)

2011 IEEE 20th Symposium on Computer Arithmetic, 2011

Comparison between binary64 and decimal64 floating-point numbers

Correct rounding of algebraic functions

Rigorous Polynomial Approximation using Taylor Models in Coq

Correctly Rounded Multiplication by Arbitrary Precision Constants

Hardware operators for function evaluation using sparse-coefficient polynomials

Electronics Letters, 2006
