Nicolas Brisebarre
Papers by Nicolas Brisebarre
ABSTRACT: We present a new way of approximating the sine and cosine functions by a few table look-ups and additions. It consists of first reducing the input range to a very small interval by using rotations with "(M, p, k) friendly angles", proposed in this work, and then using a bipartite table method on that small interval. An implementation of the method for the 24-bit case is described and compared with CORDIC. Roughly, the proposed scheme offers a speedup of 2 compared with an unfolded double-rotation radix-2 CORDIC.
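The general idea of combining a coarse table look-up with an approximation on a very small residual interval can be sketched as follows. This is a simplified illustration, not the paper's (M, p, k) friendly-angle scheme: it uses the angle-addition formula with tabulated sin/cos values on a uniform grid (table size 256 is an arbitrary assumption) and short Taylor pieces for the small remainder.

```python
import math

# Hypothetical sketch (not the authors' exact method): split x into a
# tabulated coarse angle a = i*STEP and a small remainder r, then use
# sin(a + r) = sin(a)cos(r) + cos(a)sin(r), with short polynomials for
# sin(r) and cos(r) on the small interval [0, STEP).
N = 256                      # table size (assumption)
STEP = (math.pi / 2) / N     # coarse grid spacing on [0, pi/2]
SIN_TAB = [math.sin(i * STEP) for i in range(N + 1)]
COS_TAB = [math.cos(i * STEP) for i in range(N + 1)]

def sin_approx(x: float) -> float:
    """Approximate sin(x) for x in [0, pi/2] via table look-up + polynomial."""
    i = int(x / STEP)        # index of the tabulated angle just below x
    r = x - i * STEP         # small remainder, 0 <= r < STEP
    sin_r = r - r**3 / 6.0           # low-degree Taylor pieces suffice
    cos_r = 1.0 - r * r / 2.0        # on such a small interval
    return SIN_TAB[i] * cos_r + COS_TAB[i] * sin_r
```

With a 256-entry table the remainder r stays below about 0.006, so the truncated Taylor pieces contribute an error far below single-precision resolution.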
Proceedings of the 2010 International Symposium on Symbolic and Algebraic Computation - ISSAC '10, 2010
Handbook of Floating-Point Arithmetic, 2009
The previous chapters have given an overview of interesting properties and algorithms that can be built on IEEE 754-compliant floating-point arithmetic. In this chapter, we discuss the practical issues encountered when trying to implement such algorithms on actual computers using actual programming languages. In particular, we discuss the relationship between standard compliance, portability, accuracy, and performance. This chapter is useful ...
Handbook of Floating-Point Arithmetic, 2009
As said in the Introduction, roughly speaking, a radix-β floating-point number x is a number of the form m · β^e, where β is the radix of the floating-point system, |m| is the significand of x, and e is its exponent. And yet, portability, accuracy, and the ability to prove interesting and useful properties, as ...
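The m · β^e decomposition can be observed directly on Python floats (binary64, so β = 2), using the standard-library `math.frexp`, which returns the significand-exponent pair exactly:

```python
import math

# Illustration of the m * beta**e view for radix beta = 2 Python floats.
# math.frexp returns (m, e) with x == m * 2**e and 0.5 <= |m| < 1.
x = 6.625
m, e = math.frexp(x)
assert x == m * 2.0**e       # exact: no rounding is involved

# Rescale to the integer-significand view used for binary64 (53 bits).
M = int(m * 2**53)           # integer significand
E = e - 53                   # adjusted exponent
assert x == M * 2.0**E
```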
Handbook of Floating-Point Arithmetic, 2009
Among the many operations that the IEEE 754 standards specify (see Chapter 3), we will focus here and in the next two chapters on the five basic arithmetic operations: addition, subtraction, multiplication, division, and square root. We will also study the fused multiply-add (FMA) operator. We review here some of the known properties and algorithms used to implement each of ...
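A defining property of these basic operations under IEEE 754 is correct rounding: each returns the representable number nearest the exact mathematical result. This can be checked with exact rational arithmetic from the Python standard library:

```python
from fractions import Fraction

# IEEE 754 requires +, -, *, /, sqrt to be correctly rounded. Python
# floats are binary64, so we can verify with exact rationals that the
# computed sum 0.1 + 0.2 is within one rounding step of the exact sum
# of the two operand values.
a, b = 0.1, 0.2
exact = Fraction(a) + Fraction(b)   # exact sum of the two binary64 values
computed = a + b                    # one correctly rounded operation
err = abs(Fraction(computed) - exact)
assert err <= Fraction(1, 2**54)    # at most one ulp at this magnitude
print(computed == 0.3)              # False: 0.3 is itself a rounded value
```

The famous `0.1 + 0.2 != 0.3` is thus not a violation of the standard: each of the three literals is already rounded on conversion, and the addition rounds exactly once more.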
Handbook of Floating-Point Arithmetic, 2009
The previous chapter has shown that operations on floating-point numbers are naturally expressed in terms of integer or fixed-point operations on the significand and the exponent. For instance, to obtain the product of two floating-point numbers, one basically multiplies the significands and adds the exponents. However, obtaining the correct rounding of the result may require considerable design effort and the ...
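The "multiply the significands, add the exponents" recipe can be sketched in a few lines. This toy version leans on Python's exact big integers and lets the final int-to-float conversion perform the single rounding; a real implementation must round the 106-bit significand product back to 53 bits and renormalize itself.

```python
import math

# Sketch of floating-point multiplication via integer significands.
def split(x):
    m, e = math.frexp(x)            # x = m * 2**e, 0.5 <= |m| < 1
    return int(m * 2**53), e - 53   # integer significand, adjusted exponent

def fp_mul(x, y):
    mx, ex = split(x)
    my, ey = split(y)
    # The int-to-float conversion below rounds the wide significand
    # product exactly once; scaling by a power of two is exact.
    return (mx * my) * 2.0 ** (ex + ey)

print(fp_mul(1.5, 2.5))   # 3.75
```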
Handbook of Floating-Point Arithmetic, 2009
The elementary functions are the most common mathematical functions: sine, cosine, tangent and their inverses, exponentials and logarithms of radices e, 2, or 10, etc. They appear everywhere in scientific computing; thus, being able to evaluate them quickly and accurately is important for many applications. Very different methods have been used for evaluating them: polynomial or rational approximations, shift-and-add ...
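A minimal sketch of the polynomial-approximation approach: evaluate exp(x) on a small interval with a truncated Taylor polynomial, applied via Horner's rule (one multiply and one add per coefficient). Production libraries use minimax polynomials rather than Taylor ones, for a smaller worst-case error at the same degree.

```python
import math

# Degree-11 Taylor polynomial for exp, good to roughly 1e-11 on [0, ln 2].
COEFFS = [1 / math.factorial(k) for k in range(12)]  # 1, 1, 1/2, 1/6, ...

def exp_poly(x: float) -> float:
    acc = 0.0
    for c in reversed(COEFFS):   # Horner's rule: acc = ((...)*x + c1)*x + c0
        acc = acc * x + c
    return acc
```

Range reduction (writing x = k·ln 2 + r with |r| small, then scaling by 2^k) extends such a small-interval polynomial to the whole floating-point range.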
This paper presents a C library providing software support for single-precision floating-point (FP) arithmetic on processors without FP hardware units, such as the VLIW or DSP processor cores used in embedded applications. The library offers several levels of compliance with the IEEE 754 FP standard: either the complete specification of the standard can be used, or relaxed characteristics such as restricted rounding modes or computation without denormal numbers. The library is evaluated on the ST200 VLIW processors from STMicroelectronics.
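On a core without FP hardware, binary32 values live in 32-bit integer registers and a soft-float library manipulates the fields directly. A sketch of the decoding step (normal numbers only; zeros, subnormals, infinities, and NaNs each need their own case), shown here in Python for brevity:

```python
import struct

# Decode the sign / exponent / significand fields of a binary32 value.
def decode_binary32(x: float):
    bits = struct.unpack("<I", struct.pack("<f", x))[0]  # raw 32-bit pattern
    sign = bits >> 31
    exponent = ((bits >> 23) & 0xFF) - 127       # remove the bias
    significand = (bits & 0x7FFFFF) | (1 << 23)  # restore the hidden bit
    return sign, exponent, significand

s, e, m = decode_binary32(-6.5)
# -6.5 = -1.625 * 2**2, so the integer significand is 1.625 * 2**23.
assert (s, e, m) == (1, 2, int(1.625 * 2**23))
```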
In this chapter, we focus on the computation of sums and dot products, and on the evaluation of polynomials in IEEE 754 floating-point arithmetic. Such calculations arise in many fields of numerical computing. Computing sums is required, e.g., in numerical integration and the computation of means and variances. Dot products appear everywhere in numerical linear algebra. Polynomials are used to approximate many functions (see Chapter 11).
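Naive recursive summation can lose accuracy when terms differ greatly in magnitude. Kahan's compensated summation, one of the classic techniques in this setting, carries a running correction term that recovers the low-order bits lost at each step:

```python
def kahan_sum(values):
    s = 0.0
    c = 0.0                  # running compensation for lost low-order bits
    for x in values:
        y = x - c
        t = s + y
        c = (t - s) - y      # the part of y that did not make it into t
        s = t
    return s

data = [1.0, 1e-16, 1e-16, 1e-16, 1e-16]
print(sum(data))        # naive: 1.0 (each tiny term is absorbed)
print(kahan_sum(data))  # compensated: 1.0000000000000004
```

Here each tiny term is below half an ulp of 1.0, so naive summation drops all of them, while the compensated sum returns the correctly rounded value of 1 + 4e-16.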
We call a breakpoint a value z where the rounding changes, that is, if t1 and t2 are real numbers satisfying t1 < z < t2 and ◦ is the rounding function, then ◦(t1) < ◦(t2). For directed rounding modes (i.e., towards +∞, −∞, or 0), the breakpoints are the FP numbers. For ...
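The breakpoint behaviour of a directed rounding mode can be demonstrated with the standard-library `decimal` module, used here as a stand-in radix-10 FP system with 4 significant digits (an arbitrary choice for illustration):

```python
from decimal import Decimal, getcontext, ROUND_FLOOR

getcontext().prec = 4
getcontext().rounding = ROUND_FLOOR   # directed rounding towards -infinity

# Every real value strictly between the representable numbers 1.234 and
# 1.235 rounds (towards -inf) to 1.234: under directed rounding, the
# representable numbers themselves are the breakpoints.
a = Decimal("1.2341") + 0             # "+ 0" forces rounding to 4 digits
b = Decimal("1.2349") + 0
assert a == b == Decimal("1.234")
```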
2011 IEEE 20th Symposium on Computer Arithmetic, 2011
Electronics Letters, 2006