A new class of rotational invariants using discrete orthogonal moments

Rotational invariants for Tchebichef moments

IEICE Electronics Express, 2010

Image moments that are invariant to distortions such as translation, scale and rotation are an important tool in pattern recognition. In this paper, the derivation of rotation invariants for Tchebichef moments is presented. The rotational invariants are achieved neither by tampering with the image nor by transforming the coordinates from rectangular to polar. They are derived using the moment normalization method, which maps the distorted moments to the undistorted ones. Experimental results show that the derivation is correct and that it offers a viable way to test whether one image is a rotationally distorted version of another.
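
The invariants above are built from the plain Tchebichef moments of the image. As a point of reference, the sketch below computes those moments; it is not the paper's normalization procedure. The orthonormal discrete Tchebichef basis is obtained, up to sign, by QR-orthonormalizing the monomials on the pixel grid (equivalent to the usual three-term recurrence on a uniform grid); function names are illustrative.

```python
import numpy as np

def tchebichef_basis(N, max_order):
    """Orthonormal discrete Tchebichef polynomials t_0..t_max_order evaluated
    on the grid x = 0..N-1, returned as columns of an (N, max_order+1) array.
    Built (up to sign) by orthonormalizing the monomials 1, x, x^2, ... with a
    QR decomposition; on a uniform grid this matches the usual recurrence."""
    x = np.arange(N, dtype=float)
    V = np.vander(x, max_order + 1, increasing=True)  # columns: x^0, x^1, ...
    Q, _ = np.linalg.qr(V)
    return Q

def tchebichef_moments(image, max_order):
    """2-D Tchebichef moments T_pq = sum_x sum_y t_p(x) t_q(y) f(x, y)."""
    N, M = image.shape
    Tx = tchebichef_basis(N, max_order)   # basis along rows (x)
    Ty = tchebichef_basis(M, max_order)   # basis along columns (y)
    return Tx.T @ image @ Ty              # (max_order+1, max_order+1) matrix

# Example: moments of a small random image
f = np.random.rand(64, 64)
T = tchebichef_moments(f, max_order=5)
print(T[0, 0], T[2, 3])
```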

A Comparative Analysis of Radial-Tchebichef Moments and Zernike Moments

Proceedings of the British Machine Vision Conference, 2009

Orthogonal moment descriptors are commonly used in applications such as image classification, pattern recognition and identification. A radial-polar representation of the image coordinate space is particularly useful in such applications, since it facilitates the derivation of rotation invariants of arbitrary order. Zernike moments and radial-Tchebichef moments fall into the category of moments defined using radial-polar coordinates. The discrete orthogonal nature of the kernel of radial-Tchebichef moments provides notable advantages over continuous Zernike moments. The paper presents a detailed analysis showing that radial-Tchebichef moments have superior features compared to Zernike moments and are computationally less complex. The paper also introduces a novel and fast method for accurate computation of radial moments that preserves their rotation-invariant characteristics. The efficiency of the proposed method is demonstrated through a series of experimental results.
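
For reference, the two families compared above can be written in polar form. The sketch below follows the standard textbook definitions (normalization constants may differ slightly from the paper's notation) and shows why the magnitudes give rotation invariants of arbitrary order.

```latex
% Zernike moments (continuous kernel on the unit disk):
Z_{nm} = \frac{n+1}{\pi}\int_{0}^{2\pi}\!\!\int_{0}^{1}
         R_{nm}(\rho)\, e^{-jm\theta}\, f(\rho,\theta)\, \rho \,d\rho \,d\theta .

% Radial-Tchebichef moments (discrete radial kernel, m radial and n angular samples):
S_{pq} = \frac{1}{2\pi}\sum_{r=0}^{m-1}\sum_{k=0}^{n-1}
         t_{p}(r)\, e^{-jq\theta_{k}}\, f(r,\theta_{k}),
         \qquad \theta_{k} = \frac{2\pi k}{n}.

% A rotation of the image by an angle \alpha multiplies Z_{nm} by e^{-jm\alpha}
% and S_{pq} by e^{-jq\alpha}, so |Z_{nm}| and |S_{pq}| are rotation invariants.
```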

Translation invariants of Zernike moments

Pattern Recognition, 2003

Moment functions defined using a polar coordinate representation of the image space, such as radial moments and Zernike moments, are used in several recognition tasks requiring rotation invariance. However, this coordinate representation does not easily yield translation invariant functions, which are also widely sought after in pattern recognition applications. This paper presents a mathematical framework for the derivation of translation invariants of radial moments defined in polar form. Using a direct application of this framework, translation invariant functions of Zernike moments are derived algebraically from the corresponding central moments. Both derived functions are developed for non-symmetrical as well as symmetrical images, and they mitigate the zero values obtained for odd-order moments of symmetrical images. Vision applications generally resort to image normalization to achieve translation invariance; the proposed method eliminates this requirement by providing a translation invariance property within a Zernike feature set. The performance of the derived invariant sets is experimentally confirmed using a set of binary Latin and English characters.
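
The paper obtains translation invariance algebraically from the central moments. For comparison, the sketch below shows only the conventional route it seeks to replace: compute Zernike moments on a unit disk centred at the image centroid. The simple pixel-grid discretization and helper names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_nm(rho), with n >= |m| and n - |m| even."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moment(image, n, m, center=None):
    """Z_nm of a square image mapped onto the unit disk. If center is None the
    image centroid is used, which is the conventional way to obtain
    translation invariance before computing the moments."""
    N = image.shape[0]
    y, x = np.mgrid[0:N, 0:N].astype(float)
    if center is None:
        mass = image.sum()
        cx, cy = (x * image).sum() / mass, (y * image).sum() / mass
    else:
        cx, cy = center
    # map pixels to the unit disk around the chosen centre
    rho = np.sqrt((x - cx) ** 2 + (y - cy) ** 2) / (N / 2)
    theta = np.arctan2(y - cy, x - cx)
    mask = rho <= 1.0
    kernel = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * (image * kernel * mask).sum() * (2.0 / N) ** 2

# |Z_nm| is unchanged by rotation; with the centroid shift it is also
# insensitive to translation of the pattern inside the frame.
img = np.zeros((128, 128)); img[40:80, 50:90] = 1.0
print(abs(zernike_moment(img, 4, 2)))
```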

Translation and scale invariants of Tchebichef moments

Pattern Recognition, 2007

Discrete orthogonal moments such as Tchebichef moments have been successfully used in the field of image analysis. However, the invariance property of these moments has not been studied, mainly due to the complexity of the problem. Conventionally, the translation and scale invariant functions of Tchebichef moments can be obtained either by normalizing the image or by expressing them as a linear combination of the corresponding invariants of geometric moments. In this paper, we present a new approach that is directly based on Tchebichef polynomials to derive the translation and scale invariants of Tchebichef moments. Both derived invariants are unchanged under image translation and scale transformation. The performance of the proposed descriptors is evaluated using a set of binary characters.
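
The "linear combination" route mentioned above follows from the fact that the Tchebichef polynomial of degree p is itself a polynomial in x. A sketch of that relation, with c_{p,i} denoting the expansion coefficients (notation illustrative):

```latex
t_{p}(x) = \sum_{i=0}^{p} c_{p,i}\, x^{i}
\quad\Longrightarrow\quad
T_{pq} = \sum_{x=0}^{N-1}\sum_{y=0}^{M-1} t_{p}(x)\, t_{q}(y)\, f(x,y)
       = \sum_{i=0}^{p}\sum_{j=0}^{q} c_{p,i}\, c_{q,j}\, m_{ij},
\qquad
m_{ij} = \sum_{x}\sum_{y} x^{i} y^{j} f(x,y).
```

Substituting translation- and scale-invariant combinations of the geometric moments m_{ij} therefore yields invariants of the Tchebichef moments; the contribution of the paper is to derive such invariants directly from the Tchebichef polynomials instead of going through the geometric moments.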

Numerical experiments on the accuracy of rotation moments invariants

Rotationally invariant moments are important techniques applicable to a wide range of pattern recognition applications. Although the moments are invariant with respect to spatial transformations, in practice, due to the finite screen resolution, the spatial transformations themselves affect the invariance. This phenomenon jeopardizes the quality of pattern recognition. Therefore, this paper presents an experimental analysis of the accuracy and efficiency of discrimination under the impact of the most important spatial transformations, namely rotation and scaling. We experimentally evaluate the impact of the noise induced by the spatial transformations on the most popular basis functions, such as Zernike polynomials, Mellin polynomials and wavelets. The analysis reveals that wavelet-based moment invariants constitute one of the best choices for constructing noise-resistant features.
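
The kind of experiment described can be reproduced with any rotation-invariant feature set. The sketch below uses Hu's geometric moment invariants (via scikit-image) as a stand-in for the Zernike, Mellin and wavelet moments studied in the paper, and measures how much the invariants drift purely because of the resampling noise introduced by rotation; all parameters are illustrative.

```python
import numpy as np
from scipy.ndimage import rotate
from skimage.measure import moments_central, moments_normalized, moments_hu

def hu_invariants(image):
    """Hu's seven rotation-invariant moments of a gray-level image."""
    mu = moments_central(image)
    nu = moments_normalized(mu)
    return moments_hu(nu)

# synthetic test pattern
rng = np.random.default_rng(0)
img = np.zeros((128, 128))
img[32:96, 40:88] = rng.random((64, 48))

ref = hu_invariants(img)
for angle in (5, 15, 45, 90):
    rot = rotate(img, angle, reshape=False, order=1)
    err = np.abs(hu_invariants(rot) - ref) / (np.abs(ref) + 1e-12)
    # resampling noise is largest for angles far from multiples of 90 degrees
    print(f"angle={angle:3d}  max relative drift={err.max():.3e}")
```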

Translation and scale invariants of three-dimensional Tchebichef moments

2015 Intelligent Systems and Computer Vision (ISCV), 2015

Three-dimensional digital images are gaining more attention in image processing applications and pattern classification. Discrete Tchebichef moments are widely used for pattern classification in the field of image processing. In this paper, we introduce three-dimensional translation and scale invariants of three-dimensional Tchebichef moments. They are algebraically derived directly from the Tchebichef moments. Simulated experiments using a 3-D MRI head image are carried out to verify the validity of the proposed invariants.
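
For context, the 3-D moments in question extend the separable 2-D definition with a third Tchebichef kernel along the depth axis; a sketch of the standard form (notation illustrative):

```latex
T_{pqr} = \sum_{x=0}^{N-1}\sum_{y=0}^{M-1}\sum_{z=0}^{K-1}
          t_{p}(x)\, t_{q}(y)\, t_{r}(z)\, f(x,y,z),
```

where t_p denotes the discrete orthonormal Tchebichef polynomial. The paper derives combinations of the T_{pqr} that remain unchanged under 3-D translation and uniform scaling of the volume.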

The scale invariants of pseudo-Zernike moments

Pattern Analysis & Applications, 2003

The definition of pseudo-Zernike moments takes the form of a projection of the image intensity function onto the pseudo-Zernike polynomials, which are defined using a polar coordinate representation of the image space. Hence, they are commonly used in recognition tasks requiring rotation invariance. However, this coordinate representation does not easily yield a scale invariant function, because it is difficult to extract a common scale factor from the radial polynomials. As a result, vision applications generally resort to image normalisation, or to a combination of scale invariants of geometric or radial moments, to achieve the corresponding invariants of pseudo-Zernike moments. In this paper, we present a mathematical framework to derive a new set of scale invariants of pseudo-Zernike moments based on pseudo-Zernike polynomials. They are obtained algebraically by eliminating the scale factor contained in the scaled pseudo-Zernike moments. They remain unchanged under equal-shape expansion, contraction and reflection of the original image, and they can be computed directly from any scaled image without prior knowledge of the normalisation parameters or the assistance of geometric or radial moments. Their performance is experimentally verified using a set of Chinese and Latin characters. In addition, a comparison of computational speed between the proposed descriptors and existing methods is also presented.
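
For reference, a sketch of the standard pseudo-Zernike definition that the scale problem refers to (the normalization may differ from the paper's notation):

```latex
PZ_{nl} = \frac{n+1}{\pi}\int_{0}^{2\pi}\!\!\int_{0}^{1}
          R_{nl}(\rho)\, e^{-jl\theta}\, f(\rho,\theta)\, \rho \,d\rho \,d\theta,
\qquad
R_{nl}(\rho) = \sum_{s=0}^{n-|l|}
          \frac{(-1)^{s}\,(2n+1-s)!}{s!\,(n-|l|-s)!\,(n+|l|+1-s)!}\;\rho^{\,n-s}.
```

Because R_{nl} mixes all powers of rho from rho^n down to rho^{|l|}, a change of scale does not factor out of the moment as a single power of the scale factor, which is the difficulty the proposed invariants address.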

Accurate Orthogonal Circular Moment Invariants of Gray-Level Images

Journal of Computer Science, 2011

Problem statement: Orthogonal circular moments of gray-level images, such as Zernike, pseudo-Zernike and Fourier-Mellin moments, are widely used in different applications of image processing, pattern recognition and computer vision. The computation of these moments and of their translation and scale invariants is still an open area of research. Approach: A unified methodology is presented for efficient and accurate computation of orthogonal circular moment invariants. The orthogonal circular moments and their translation and scale invariants are expressed as a linear combination of radial moments of the same order in polar coordinates, where the latter moments are accurately computed over a unit disk. A new mapping method is proposed in which the unit disk is divided into non-overlapping circular rings; each of these rings is divided into a number of circular sectors of the same area, and each sector is represented by one point at its centre. The total number of input Cartesian image pixels is equal to the number of mapped circular pixels. Results: The implementation of this method completely removes both the approximation and geometrical errors produced by conventional methods. Numerical experiments are conducted to prove the validity and efficiency of the proposed method. Conclusion: A unified methodology is presented for efficient and accurate computation of orthogonal circular moment invariants.
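
One plausible realization of the described mapping is sketched below, assuming equal-width rings and a sector count per ring proportional to the ring area, so that all sectors have approximately equal area; the image is sampled at the sector centres by bilinear interpolation. The construction details and names are illustrative, not the authors' exact scheme.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def disk_sample_points(n_rings):
    """Centre points (rho, theta) of roughly equal-area sectors of the unit disk.
    Ring i covers rho in [i/M, (i+1)/M]; it is split into n_i sectors, with n_i
    proportional to the ring area so that every sector has about the same area."""
    pts = []
    for i in range(n_rings):
        r_in, r_out = i / n_rings, (i + 1) / n_rings
        n_sectors = max(1, round((r_out**2 - r_in**2) * n_rings**2))
        r_mid = 0.5 * (r_in + r_out)
        for k in range(n_sectors):
            theta = 2.0 * np.pi * (k + 0.5) / n_sectors
            pts.append((r_mid, theta))
    return np.array(pts)

def radial_moment(image, p, q, n_rings=32):
    """Radial moment sum_k rho_k^p exp(-j*q*theta_k) f(rho_k, theta_k) over the
    equal-area sample points, with f taken by bilinear interpolation."""
    N = image.shape[0]
    pts = disk_sample_points(n_rings)
    rho, theta = pts[:, 0], pts[:, 1]
    # map unit-disk coordinates back to pixel coordinates (disk inscribed in frame)
    cx = cy = (N - 1) / 2.0
    rows = cy + rho * (N / 2.0) * np.sin(theta)
    cols = cx + rho * (N / 2.0) * np.cos(theta)
    samples = map_coordinates(image, np.vstack([rows, cols]), order=1)
    weight = np.pi / len(pts)              # approximate area of one sector
    return weight * np.sum(samples * rho**p * np.exp(-1j * q * theta))

img = np.zeros((128, 128)); img[30:90, 30:90] = 1.0
print(abs(radial_moment(img, 2, 2)))
```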

Radial Tchebichef Invariants for Pattern Recognition

TENCON 2005 - 2005 IEEE Region 10 Conference, 2005

This paper presents the mathematical framework of radial Tchebichef moment invariants and investigates their feature representation capabilities for pattern recognition applications. The radial Tchebichef moments are constructed using the discrete orthogonal Tchebichef polynomials as the kernel, and they have a radial-polar form similar to that of Zernike moments. The discrete form of the moment transform makes them particularly suitable for image processing tasks. Experimental results demonstrating the primary attributes of the proposed moment functions, such as invariance and orthogonality, are also given.
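
A minimal sketch of radial Tchebichef moments and their magnitude invariants, under the following assumptions: the discrete Tchebichef kernel along the radius is built by QR-orthonormalizing monomials on the radial grid, the image is resampled onto a polar grid by bilinear interpolation, and |S_pq| is taken as the rotation invariant. Resolution parameters and helper names are illustrative.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def tchebichef_basis(m, max_order):
    """Orthonormal discrete Tchebichef polynomials on r = 0..m-1 (as columns),
    built by QR-orthonormalizing the monomials 1, r, r^2, ..."""
    r = np.arange(m, dtype=float)
    Q, _ = np.linalg.qr(np.vander(r, max_order + 1, increasing=True))
    return Q

def radial_tchebichef_moments(image, max_order, n_theta=64):
    """S_pq = (1/(2*pi)) * sum_r sum_k t_p(r) exp(-j*q*theta_k) f(r, theta_k);
    the magnitude |S_pq| is invariant to rotation of the image."""
    N = image.shape[0]
    m = N // 2                                # number of radial samples
    t = tchebichef_basis(m, max_order)        # (m, max_order+1)
    thetas = 2.0 * np.pi * np.arange(n_theta) / n_theta
    cx = cy = (N - 1) / 2.0
    r_grid, th_grid = np.meshgrid(np.arange(m), thetas, indexing="ij")
    rows = cy + (r_grid / m) * (N / 2.0) * np.sin(th_grid)
    cols = cx + (r_grid / m) * (N / 2.0) * np.cos(th_grid)
    f_polar = map_coordinates(image, [rows.ravel(), cols.ravel()],
                              order=1).reshape(m, n_theta)
    S = np.empty((max_order + 1, max_order + 1), dtype=complex)
    for q in range(max_order + 1):
        phase = np.exp(-1j * q * thetas)      # angular kernel
        S[:, q] = (t.T @ (f_polar @ phase)) / (2.0 * np.pi)
    return S

img = np.zeros((128, 128)); img[30:90, 40:80] = 1.0
print(np.abs(radial_tchebichef_moments(img, 4)))
```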

Radial invariant of 2D and 3D Racah moments

Multimedia Tools and Applications, 2017

In this paper, we introduce new sets of 2D and 3D rotation, scaling and translation invariants based on orthogonal radial Racah moments, together with the theoretical mathematics needed to derive them. First, we propose new 2D radial Racah moments based on a polar representation of an object by the one-dimensional orthogonal discrete Racah polynomials on a non-uniform lattice and a circular function. Second, we present new 3D radial Racah moments using a spherical representation of a volumetric image by the one-dimensional orthogonal discrete Racah polynomials and a spherical function. Third, 2D and 3D invariants are extracted from the proposed 2D and 3D radial Racah moments, respectively. To validate the proposed approach, we address three problems: 2D/3D image reconstruction; invariance to 2D/3D rotation, scaling and translation; and pattern recognition. The experimental results show that the Racah moments perform better than the Krawtchouk moments, both with and without noise. The reconstruction converges rapidly to the original image using the 2D and 3D radial Racah moments, and the test 2D/3D images are clearly recognized from sets of images drawn from the COIL-20 database for 2D images and the PSB database for 3D images.