Last update: Sept. 1, 1998


Publications by INRIA on Recognition & related problems:

BibTeX references.


Computing the mean of geometric features: Application to the mean rotation

Xavier Pennec
RR-3371, Sophia Antipolis, Projet EPIDAURE, March 1998

Abstract

The question we investigate in this article is: what is the mean value of a set of geometric features and how can we compute it? We use as a guiding example one of the most studied types of features in computer vision and robotics: 3D rotations. The usual techniques for points consist of minimizing the least-squares criterion (which gives the barycenter), the weighted least-squares criterion, or the sum of squared Mahalanobis distances. Unfortunately, these techniques rely on the vector-space structure of points, and generalizing them directly to other types of features can lead to paradoxes. For instance, computing the barycenter of rotations using rotation matrices, unit quaternions or rotation vectors gives three different results.
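
As an informal illustration of this paradox (our own sketch, not drawn from the report), the Python fragment below averages three arbitrary sample rotations in each of the three parameterizations and maps every average back to a rotation; the sample values, the SVD projection, and the quaternion sign handling are assumptions made purely for the example.

# Illustrative sketch (not from the report): naively averaging a few rotations in
# three different parameterizations and mapping each average back to a rotation.
# Requires NumPy and SciPy; the sample rotations below are arbitrary.
import numpy as np
from scipy.spatial.transform import Rotation

rots = Rotation.from_rotvec([[0.0, 0.0, 0.0],
                             [0.0, 0.0, 1.2],
                             [0.9, 0.0, 0.0]])

# 1) Arithmetic mean of the 3x3 matrices, projected back onto SO(3) via SVD.
M = rots.as_matrix().mean(axis=0)
U, _, Vt = np.linalg.svd(M)
R_mat = U @ np.diag([1.0, 1.0, np.linalg.det(U @ Vt)]) @ Vt

# 2) Arithmetic mean of unit quaternions (signs aligned first), renormalized.
q = rots.as_quat()                          # scalar-last (x, y, z, w) convention
q[q[:, 3] < 0] *= -1                        # resolve the q / -q ambiguity
q_mean = q.mean(axis=0)
R_quat = Rotation.from_quat(q_mean / np.linalg.norm(q_mean))

# 3) Arithmetic mean of rotation vectors (axis * angle), exponentiated.
R_vec = Rotation.from_rotvec(rots.as_rotvec().mean(axis=0))

# The three "mean rotations" differ; compare, e.g., their rotation vectors.
print(Rotation.from_matrix(R_mat).as_rotvec())
print(R_quat.as_rotvec())
print(R_vec.as_rotvec())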

We present in this article a thorough generalization of the three above criteria to homogeneous Riemannian manifolds, relying only on intrinsic characteristics of the manifold. The necessary condition for the mean rotation, independently derived in [Den96], is obtained here as a particular case of a general formula. We also propose an intrinsic gradient descent algorithm to obtain the minimum of the criteria and show how to estimate the uncertainty of the resulting estimate. These algorithms prove to be not only accurate but also efficient: computations are only 3 to 4 times longer for rotations than for points. The predicted accuracy of the results is within 1%, which is quite remarkable.
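
The sketch below shows, in Python, the kind of intrinsic gradient descent used for this problem in the simplest setting of 3D rotations with the bi-invariant metric; it is our own minimal illustration rather than the report's implementation, and the name karcher_mean, the starting point, and the tolerance are arbitrary choices.

# Minimal sketch of an intrinsic (Karcher / Frechet) mean of rotations by gradient
# descent under the bi-invariant metric on SO(3); not the report's implementation.
# The exp and log maps are realized through rotation vectors via SciPy.
import numpy as np
from scipy.spatial.transform import Rotation

def karcher_mean(rots, tol=1e-10, max_iter=100):
    """Pull the rotations to the tangent space at the current estimate with the
    log map, average them there, and push the result back with the exp map."""
    mean = rots[0]                                   # any reasonable starting point
    for _ in range(max_iter):
        tangent = (mean.inv() * rots).as_rotvec()    # log map: rotvec of mean^{-1} R_i
        step = tangent.mean(axis=0)                  # Euclidean mean in the tangent space
        mean = mean * Rotation.from_rotvec(step)     # exp map back onto the group
        if np.linalg.norm(step) < tol:
            break
    return mean

rots = Rotation.from_rotvec([[0.0, 0.0, 0.0],
                             [0.0, 0.0, 1.2],
                             [0.9, 0.0, 0.0]])
print(karcher_mean(rots).as_rotvec())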

The striking similarity of the algorithms' behavior for general features and for points underlines the validity of our approach based on Riemannian geometry and leads us to anticipate that other statistical results and algorithms could be generalized to manifolds in this framework.

Keywords: mean feature, mean rotation, fusion, Riemannian geometry.


Toward a Generic Framework for Recognition Based on Uncertain Geometric Features

Xavier Pennec
Videre, vol. 1(2), pp. 58-87

Abstract

The recognition problem is probably one of the most studied in computer vision. However, most techniques were developed for point features and were not explicitly designed to cope with uncertainty in the measurements. The aim of this paper is to express recognition algorithms in terms of uncertain geometric features (such as points, lines, oriented points, or frames). In the first part, we review the principal matching algorithms and adapt them to work with generic geometric features. We then analyze some noise models on geometric features for recognition and consider how to cope with this uncertainty in the matching algorithms. We identify four key problems for the implementation of these algorithms. Last but not least, we present a new statistical analysis of the probability of false positives that demonstrates the drastic improvement in confidence and complexity that can be obtained by using geometric features more complex than points.
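
As a generic illustration of how measurement uncertainty can enter a matching test (this is not the paper's algorithm), the Python fragment below gates a putative correspondence between two uncertain point features by comparing their squared Mahalanobis distance with a chi-square threshold; the covariances and the 95% level are invented for the example.

# Generic illustration (not the paper's algorithm): accept a putative match between
# two uncertain point features when their squared Mahalanobis distance falls below
# a chi-square threshold. The covariances and the confidence level are invented.
import numpy as np
from scipy.stats import chi2

def is_compatible(x, Sx, y, Sy, alpha=0.95):
    """Gate a match between measurements x ~ N(x, Sx) and y ~ N(y, Sy)."""
    d = x - y
    mu2 = d @ np.linalg.solve(Sx + Sy, d)            # squared Mahalanobis distance
    return mu2 < chi2.ppf(alpha, df=d.size)

x  = np.array([1.00, 2.00, 0.50])
Sx = np.diag([0.01, 0.01, 0.04])                     # anisotropic measurement noise
y  = np.array([1.05, 1.98, 0.60])
Sy = np.diag([0.01, 0.01, 0.04])

print(is_compatible(x, Sx, y, Sy))                   # True: within the 95% gate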

Keywords: 3-D object recognition, invariants of 3-D objects, feature-matching, uncertain geometric features, generic features, matching error analysis


A Framework for Uncertainty and Validation of 3D Registration Methods Based on Points and Frames

Xavier Pennec & J.P. Thirion
International Journal of Computer Vision, vol. 25(3), 1997, pp. 203-229


Randomness and Geometric Features in Computer Vision

Xavier Pennec, Nicholas Ayache
RR-2820, Sophia Antipolis, Projet EPIDAURE, March 1996

Abstract

It is often necessary to handle randomness and geometry in computer vision, for instance to match and fuse together noisy geometric features such as points, lines or 3D frames, or to estimate a geometric transformation from a set of matched features. However, the proper handling of these geometric features is far more difficult than for points, and a number of paradoxes can arise. In this article we try to establish the basic mathematical framework required to avoid them and analyze more specifically three basic problems:

  1. What is a random distribution of features?
  2. How can we define a distance between features?
  3. What is the "mean feature" of a number of feature measurements?

We insist on the importance of an invariance hypothesis for these definitions, relative to a group of transformations. We develop general methods to solve these three problems and illustrate them with 3D frame features under rigid transformations. The first problem has a direct application in the computation of the prior probability of a false match in classical model-based object recognition algorithms, and we present experimental results for the other two on a data fusion problem: the statistical analysis of anatomical features (extremal points) automatically extracted from 24 three-dimensional images of the head of a single patient. These experiments successfully confirm the importance of the rigorous requirements presented in this article.
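
As a small numerical check of this invariance requirement (our own illustration, not taken from the report), the Python fragment below compares two candidate distances between rotations: the geodesic distance, i.e. the rotation angle of r1^{-1} r2, which is unchanged when both rotations are composed with the same arbitrary rotation, and the Euclidean distance between their rotation-vector coordinates, which in general is not.

# Our own illustration (not from the report): the geodesic distance between two
# rotations is invariant under composition with any third rotation g, whereas the
# Euclidean distance between rotation-vector coordinates is not.
import numpy as np
from scipy.spatial.transform import Rotation

def geodesic_dist(r1, r2):
    return (r1.inv() * r2).magnitude()               # rotation angle of r1^{-1} r2

def rotvec_dist(r1, r2):
    return np.linalg.norm(r1.as_rotvec() - r2.as_rotvec())

samples = Rotation.random(3, random_state=42)        # two features and a group element
r1, r2, g = samples[0], samples[1], samples[2]

print(geodesic_dist(r1, r2), geodesic_dist(g * r1, g * r2))   # equal
print(rotvec_dist(r1, r2), rotvec_dist(g * r1, g * r2))       # generally different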

Keywords: geometric features, transformation groups, randomness, invariant measure, invariant distance, expected features, mean features.


Page created & maintained by Frederic Leymarie, 1998.
Comments, suggestions, etc., mail to: leymarie@lems.brown.edu