Scale Invariant Feature Transform for n-Dimensional Images (n-SIFT)

n-SIFT: n-Dimensional Scale Invariant Feature Transform

We propose the n-dimensional scale invariant feature transform (n-SIFT) method for extracting and matching salient features from scalar images of arbitrary dimensionality, and compare this method's performance to other related features. The proposed features extend the concepts used for 2-D scalar images in the computer vision SIFT technique for extracting and matching distinctive scale invariant features. We apply the features to images of arbitrary dimensionality through the use of hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. We analyze the performance of a fully automated multimodal medical image matching technique based on these features, and successfully apply the technique to determine accurate feature point correspondence between pairs of 3-D MRI images and dynamic CT data.
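As a concrete illustration of the hyperspherical-coordinate step mentioned above, the sketch below converts an n-D gradient vector into a magnitude plus n−1 angles using the standard hyperspherical recurrence. This is a minimal NumPy sketch under our own naming, not code from the paper.

```python
import numpy as np

def gradient_to_hyperspherical(g):
    """Convert an n-D vector to hyperspherical coordinates: magnitude r
    plus n-1 angles (the first n-2 in [0, pi], the last in (-pi, pi]),
    following the standard recurrence. Illustrative helper, not from
    the n-SIFT paper."""
    g = np.asarray(g, dtype=float)
    r = np.linalg.norm(g)
    angles = []
    for k in range(len(g) - 1):
        if k < len(g) - 2:
            tail = np.linalg.norm(g[k:])  # norm of the remaining components
            angles.append(np.arccos(g[k] / tail) if tail > 0 else 0.0)
        else:
            # Last angle uses atan2 so it covers the full circle
            angles.append(np.arctan2(g[-1], g[-2]))
    return r, np.array(angles)

# Example with a 3-D image gradient: r = 2.0, angles ~ (60.0, 54.74) degrees
r, angles = gradient_to_hyperspherical([1.0, 1.0, np.sqrt(2.0)])
print(r, np.degrees(angles))
```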

n-SIFT: n-Dimensional Scale Invariant Feature Transform

Image Processing, IEEE …, 2009

We present a fully automated multimodal medical image matching technique. Our method extends the concepts used in the computer vision SIFT technique for extracting and matching distinctive scale invariant features in 2D scalar images to scalar images of arbitrary dimensionality. This extension involves using hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. These features were successfully applied to determine accurate feature point correspondence between pairs of medical images (3D) and dynamic volumetric data (3D+time).
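To make the descriptor construction concrete, here is a minimal sketch (not the authors' implementation) of a magnitude-weighted orientation histogram over a 3-D patch. n-SIFT additionally subdivides the neighborhood spatially and concatenates per-subregion histograms; that step, and the bin counts chosen here, are illustrative simplifications.

```python
import numpy as np

def orientation_histogram_3d(patch, theta_bins=8, phi_bins=4):
    """Gradient orientations of a 3-D patch expressed as (magnitude,
    azimuth theta, elevation phi) and accumulated into a 2-D histogram
    weighted by magnitude; the flattened, normalized histogram serves
    as a toy feature vector."""
    gz, gy, gx = np.gradient(patch.astype(float))  # gradients along z, y, x
    mag = np.sqrt(gx**2 + gy**2 + gz**2)
    theta = np.arctan2(gy, gx)                     # azimuth in [-pi, pi]
    cos_phi = np.divide(gz, mag, out=np.zeros_like(mag), where=mag > 0)
    phi = np.arccos(np.clip(cos_phi, -1.0, 1.0))   # elevation in [0, pi]
    hist, _, _ = np.histogram2d(
        theta.ravel(), phi.ravel(),
        bins=[theta_bins, phi_bins],
        range=[[-np.pi, np.pi], [0, np.pi]],
        weights=mag.ravel(),
    )
    vec = hist.ravel()
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

desc = orientation_histogram_3d(np.random.rand(16, 16, 16))
print(desc.shape)  # (32,)
```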

Keypoint descriptor matching with context-based orientation estimation

This paper presents a matching strategy to improve the discriminative power of histogram-based keypoint descriptors by constraining the range of allowable dominant orientations according to the context of the scene under observation. This can be done when the descriptor uses a circular grid and quantized orientation steps, by computing or providing a global reference orientation based on the feature matches.
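A minimal sketch of the constraint just described, assuming the keypoints' dominant orientations and a global reference rotation are already available; the names, data layout, and tolerance below are our illustrative assumptions, not taken from the paper.

```python
import numpy as np

def filter_matches_by_orientation(matches, angles_a, angles_b,
                                  global_rotation, tolerance=np.radians(20)):
    """Keep only matches whose keypoint-orientation difference agrees,
    within a tolerance window, with a global reference rotation (e.g.
    estimated from an initial round of matching). Angles in radians;
    `matches` is a list of (index_in_a, index_in_b) pairs."""
    kept = []
    for ia, ib in matches:
        residual = angles_b[ib] - angles_a[ia] - global_rotation
        residual = (residual + np.pi) % (2 * np.pi) - np.pi  # wrap to (-pi, pi]
        if abs(residual) <= tolerance:
            kept.append((ia, ib))
    return kept
```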

Performance Evaluation of Scale Invariant Feature Transform

International Journal of …, 2009

The SIFT algorithm produces keypoint descriptors. This paper analyzes how the number of keypoints generated by SIFT changes as the number of sublevels per octave is increased, and finds that SIFT maintains a good hit rate in this analysis. The algorithm was tested over a ...
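The experiment can be reproduced in spirit with OpenCV's SIFT implementation, whose nOctaveLayers parameter controls the number of sublevels per octave. A minimal sketch, assuming opencv-python >= 4.4 (where SIFT lives in the main module) and a placeholder image path:

```python
import cv2

# Count SIFT keypoints as the number of sublevels (layers) per octave grows.
img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
for layers in (2, 3, 4, 5, 6):
    sift = cv2.SIFT_create(nOctaveLayers=layers)
    keypoints = sift.detect(img, None)
    print(f"nOctaveLayers={layers}: {len(keypoints)} keypoints")
```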

N-SIFT: N-dimensional scale invariant feature transform for matching medical images

… Imaging: From Nano to Macro …, 2007

We present a fully automated multimodal medical image matching technique. Our method extends the concepts used in the computer vision SIFT technique for extracting and matching distinctive scale invariant features in 2D scalar images to scalar images of arbitrary dimensionality. This extension involves using hyperspherical coordinates for gradients and multidimensional histograms to create the feature vectors. These features were successfully applied to determine accurate feature point correspondence between pairs of medical images (3D) and dynamic volumetric data (3D+time).

Potential of SIFT, SURF, KAZE, AKAZE, ORB, BRISK, AGAST, and 7 More Algorithms for Matching Extremely Variant Image Pairs

Extremely variant image pairs include distorted, deteriorated, and corrupted scenes that have experienced severe geometric, photometric, or non-geometric, non-photometric transformations with respect to their originals. Real-world visual data can become extremely dusty, smoky, dark, noisy, motion-blurred, affine-distorted, JPEG-compressed, occluded, shadowed, virtually invisible, etc. Matching of extremely variant scenes is therefore an important problem, and computer vision solutions must be able to yield robust results no matter how complex the visual input is. Similarly, there is a need to evaluate feature detectors under such complex conditions. With standard settings, feature detection, description, and matching algorithms typically fail to produce a significant number of correct matches in these types of images. However, if the full potential of the algorithms is exploited by using extremely low thresholds, very encouraging results are obtained. In this paper, the potential of 14 feature detectors: SIFT, SURF, KAZE, AKAZE, ORB, BRISK, AGAST, FAST, MSER, MSD, GFTT, Harris Corner Detector based GFTT, Harris Laplace Detector, and CenSurE is evaluated for matching 10 extremely variant image pairs. MSD detected more than 1 million keypoints in one of the images, and SIFT exhibited a repeatability score of 99.76% for the extremely noisy image pair but failed to yield a high quantity of correct matches. Rich information is presented in terms of feature quantity, total feature matches, correct matches, and repeatability scores. Moreover, the computational costs of 25 diverse feature detectors are reported towards the end, which can be used as a benchmark for comparison studies.
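The low-threshold strategy can be sketched with OpenCV's SIFT, whose contrastThreshold and edgeThreshold gate keypoint acceptance; relaxing them admits many more keypoints in degraded images, and Lowe's ratio test then filters the matches. The paper's exact settings are not reproduced here; the values and file names below are illustrative:

```python
import cv2

img1 = cv2.imread("original.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
img2 = cv2.imread("degraded.png", cv2.IMREAD_GRAYSCALE)

# Near-zero contrast threshold and a large edge threshold admit weak keypoints.
sift = cv2.SIFT_create(contrastThreshold=1e-5, edgeThreshold=100)
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]                 # Lowe's ratio test
print(f"{len(kp1)} / {len(kp2)} keypoints, {len(good)} tentative matches")
```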

A Comparison of SIFT and SURF

International Journal of Innovative Research in Computer and Communication Engineering, 2013

Accurate, robust, and automatic image registration is a critical task in many applications. To perform image registration/alignment, the required steps are: feature detection, feature matching, derivation of a transformation function based on corresponding features in the images, and reconstruction of the images based on the derived transformation function. The accuracy of the registered image depends on accurate feature detection and matching, so these two intermediate steps are very important in many image applications: image registration, computer vision, image mosaicking, etc. This paper presents two different methods for scale- and rotation-invariant interest point/feature detection and description: Scale Invariant Feature Transform (SIFT) and Speeded-Up Robust Features (SURF). It also presents a way to extract distinctive invariant features from images that can be used to perform reliable matching between different views of an object or scene.
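The four steps listed above can be sketched end-to-end with OpenCV: SIFT detection and description, ratio-test matching, RANSAC estimation of the transformation, and warping. File names and thresholds are placeholders; SURF would slot in the same way in builds that ship opencv_contrib's xfeatures2d module.

```python
import cv2
import numpy as np

fixed = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
moving = cv2.imread("moving.png", cv2.IMREAD_GRAYSCALE)

# Steps 1-2: feature detection/description and matching
sift = cv2.SIFT_create()
kp_f, des_f = sift.detectAndCompute(fixed, None)
kp_m, des_m = sift.detectAndCompute(moving, None)
pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_m, des_f, k=2)
good = [m for m, n in pairs if m.distance < 0.75 * n.distance]

# Step 3: derive the transformation from corresponding features (RANSAC)
src = np.float32([kp_m[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_f[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

# Step 4: reconstruct (warp) the moving image into the fixed frame
registered = cv2.warpPerspective(moving, H, fixed.shape[::-1])
```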

A Proposed Method of Selecting Pairs of SIFT Points

2019

Pattern matching between two scenes is an important part of image or pattern recognition and registration. It is based on finding the best match between the scenes using selected control points. Images of solar planets make it somewhat difficult to define distinctive features for matching, especially compared with images of Earth (e.g., in contrast and resolution). Geometric transformations (scaling and rotation) may affect the registration if the control points are improperly selected, and the degree of geometric distortion may affect the number and accuracy of matches. In this article, a new auto-selection method for the reference points is established. First, for scale compensation, a new procedure is evaluated over a range of scaling factors (0.5-1.5); the method is also evaluated when rotation effects (1°-90°) accompany the scaling, over the same scaling range. Second, a new matching method for finding the ...
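The scaling (0.5-1.5) and rotation (1°-90°) evaluation can be mimicked with a simple synthetic harness: warp an image over the stated ranges and count ratio-test SIFT matches against the original. This is our illustrative setup, not the article's method; the image path and helper name are placeholders.

```python
import cv2
import numpy as np

def match_count(img_a, img_b, ratio=0.75):
    """Number of ratio-test SIFT matches between two images."""
    sift = cv2.SIFT_create()
    _, des_a = sift.detectAndCompute(img_a, None)
    _, des_b = sift.detectAndCompute(img_b, None)
    pairs = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    return sum(m.distance < ratio * n.distance for m, n in pairs)

img = cv2.imread("planet.png", cv2.IMREAD_GRAYSCALE)  # placeholder path
h, w = img.shape
for scale in np.arange(0.5, 1.51, 0.25):
    for angle in (1, 30, 60, 90):
        M = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
        warped = cv2.warpAffine(img, M, (w, h))
        print(f"scale={scale:.2f}, angle={angle}deg: "
              f"{match_count(img, warped)} matches")
```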

A COMPREHENSIVE AND COMPARATIVE SURVEY OF THE SIFT ALGORITHM - Feature Detection, Description, and Characterization

Proceedings of the International Conference on Computer Vision Theory and Applications, 2012

The SIFT feature extractor was introduced by Lowe in 1999. This algorithm provides invariant features and the corresponding local descriptors; the descriptors are then used in the image matching process. We give an overview of this algorithm: the methodology, the tricky steps of its implementation, and the properties of the detector and descriptor. We analyze the structure of the detected features, and finally compare our implementation to others, including Lowe's.
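As an illustration of the detector's first tricky step, the sketch below builds one octave of a difference-of-Gaussians (DoG) stack and flags local 3x3x3 scale-space extrema as candidate keypoints. Sub-pixel refinement, contrast filtering, and edge rejection are omitted; the constants follow Lowe's commonly cited defaults, and this is our simplification rather than any of the compared implementations.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter, minimum_filter

def dog_candidates(image, sigma0=1.6, levels=6, k=2 ** (1 / 3)):
    """Blur at geometrically spaced sigmas, difference adjacent blurs
    (DoG), and flag voxels that are extrema of their 3x3x3 scale-space
    neighborhood. Returns (x, y, scale_index) candidate triples."""
    blurred = [gaussian_filter(image.astype(float), sigma0 * k**i)
               for i in range(levels)]
    dog = np.stack([b - a for a, b in zip(blurred, blurred[1:])])
    is_max = dog == maximum_filter(dog, size=3)
    is_min = dog == minimum_filter(dog, size=3)
    s, y, x = np.nonzero(is_max | is_min)
    keep = (s > 0) & (s < dog.shape[0] - 1)  # need a scale above and below
    return list(zip(x[keep], y[keep], s[keep]))
```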