FACEngine ID Performance & Accuracy

Animetrics has performed extensive testing on its FACEngine ID system. The performance tests have been designed to simulate a variety of applications for which FACEngine ID is suited.
Animetrics FACEngine ID has been designed to accommodate many of the confounding variables that have hindered the deployment of facial biometric solutions. Three core technologies provide FACEngine ID with unmatched capabilities for robust facial identification: Avatar Generation provides invariance to pose; robust head tracking provides reliable localization of faces even in highly cluttered images; and photometric invariance with lighting correction allows FACEngine ID to accommodate a wide range of environments, from overhead office lighting to noonday sunlight to dusk.
FACEngine ID Component Accuracy

FACEngine ID has several components that contribute, ultimately, to the overall system performance. The components are employed serially to produce the most accurate facial biometric solution possible. These components include:
  1. 2D Image Feature Analysis (2DIFA)
  2. Avatar Generation and Pose Normalization
  3. Signature Generation and Matching

2D Image Feature Analysis Accuracy Testing
The 2D Image Feature Analysis (2DIFA) is a fundamental component of Animetrics' facial biometric system. The results of the analysis of the image plane drive the performance of all subsequent operations: if the image analysis is substandard, even a "perfect" matching algorithm will fail. Accurate and robust image analysis is the cornerstone of all facial biometric systems. Competing trackers obtain computational efficiency by examining only a small number of features. Animetrics' 2DIFA instead rapidly tracks a dense continuum of points in the projective geometry, yielding accurate results across a broader range of conditions and robustness to a greater variety of poses.
Animetrics evaluates 2DIFA accuracy by obtaining ground truth values of feature coordinates through the meticulous hand-landmarking of images, and comparing those values to the automatically generated values produced by the 2DIFA module. Detailed reports on 2DIFA accuracy tests can be found here.
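The comparison described above can be sketched in a few lines. The landmark coordinates below are hypothetical, and mean Euclidean distance is assumed as the error metric; the source does not specify Animetrics' actual metric:

```python
import math

def landmark_error(auto_points, truth_points):
    """Mean Euclidean distance between automatically located landmarks
    and hand-landmarked ground-truth coordinates (in pixels)."""
    assert len(auto_points) == len(truth_points)
    dists = [math.dist(a, t) for a, t in zip(auto_points, truth_points)]
    return sum(dists) / len(dists)

# Hypothetical landmarks: left eye corner, right eye corner, nose tip
truth = [(120.0, 80.0), (180.0, 80.0), (150.0, 130.0)]
auto  = [(121.0, 81.0), (179.0, 82.0), (150.0, 128.0)]
print(landmark_error(auto, truth))
```

A lower mean error indicates tighter agreement between the 2DIFA module and the hand-landmarked ground truth.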
Avatar Generation and Pose Normalization
A process unique to Animetrics, Avatar Generation produces a 3-dimensional model of a subject by applying highly complex mathematical algorithms to 2-dimensional images and, through a process of smooth deformations, generates realistic, accurate, fully structured and defined 3-dimensional avatars. The generated 3-dimensional model can easily be reoriented to vastly improve identification performance on off-pose data. Animetrics tests and validates the precision of this avatar generation and pose normalization by employing synthetic images, generated through the manual landmarking of facial imagery, which are rendered at a predetermined pose. We then analyze the projected feature coordinates of the synthetic image, feed those values back into the avatar generation module, and compare the known original values with the newly generated ones. Details of this testing can be found here.
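The round-trip idea behind this validation can be illustrated with a deliberately simplified model: yaw-only rotation and orthographic projection, with a handful of hypothetical 3-D landmarks. Animetrics' actual avatar deformation is far richer; this only shows how rendering at a known pose and reorienting back should reproduce the original coordinates:

```python
import math

def rotate_yaw(points3d, theta):
    """Rotate 3-D points about the vertical (y) axis by theta radians."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x + s * z, y, -s * x + c * z) for x, y, z in points3d]

def project(points3d):
    """Orthographic projection onto the image plane (drop depth)."""
    return [(x, y) for x, y, z in points3d]

# Hypothetical frontal 3-D landmarks (x, y, depth)
model = [(-30.0, 0.0, 10.0), (30.0, 0.0, 10.0), (0.0, -40.0, 25.0)]

theta = math.radians(20)  # render the synthetic image at a known off-pose

# Pose normalization: reorient the off-pose model by the inverse rotation
# and compare the projected coordinates against the known frontal ones.
normalized = project(rotate_yaw(rotate_yaw(model, theta), -theta))
frontal = project(model)
max_err = max(math.dist(a, b) for a, b in zip(normalized, frontal))
print(max_err)  # near zero up to floating-point rounding
```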
Signature Generation and Matching
The final component comprises the methods and algorithms that perform the actual comparison of faces to a watchlist or other database of face records. Two sub-components are at play here: 1) facial signature generation; and 2) the matching algorithm.
Facial signature generation distills the image to a low-dimensional representation that is optimal for performing identification. A priori, the matching algorithm computes a set of rules that is optimal for discriminating between people in the database or watchlist and the rest of the world. These rules are then used to decide whether an incoming probe belongs to the database or watchlist.
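A minimal sketch of this decision, assuming cosine similarity over low-dimensional signatures and a simple accept threshold (both stand-ins; the source does not describe Animetrics' actual scoring or rule set):

```python
import math

def cosine(a, b):
    """Cosine similarity between two signature vectors."""
    num = sum(x * y for x, y in zip(a, b))
    den = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return num / den

def identify(probe, gallery, threshold):
    """Compare a probe signature against every watchlist record and
    return (best_id, score), or (None, score) if the probe is judged
    not to belong to the watchlist."""
    best_id, best_score = None, -1.0
    for subject_id, signature in gallery.items():
        score = cosine(probe, signature)
        if score > best_score:
            best_id, best_score = subject_id, score
    return (best_id, best_score) if best_score >= threshold else (None, best_score)

# Hypothetical low-dimensional signatures
gallery = {"subject_a": [0.9, 0.1, 0.2], "subject_b": [0.1, 0.8, 0.4]}
probe = [0.85, 0.15, 0.25]
print(identify(probe, gallery, threshold=0.95))
```

Raising the threshold makes false accepts rarer but false rejects more common, which is the FAR/FRR trade-off discussed below.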
The tests performed to date are an expression of watchlist identification: the comparison of a subject (probe) against a set of records representing the watchlist (gallery), also referred to as a 1-to-many test. Facial recognition performance for watchlist identification is generally represented through the comparison of two variables: False Accept Rate (FAR) and False Reject Rate (FRR). The two errors any identification system can make are a false accept (identifying an individual as someone they are not) and a false reject (the failure to identify someone as themselves). These are expressed as rates, or percentages, and are generally inversely related: as FAR decreases, FRR increases. For reference, these values can alternatively be represented as True Reject Rate (TRR = 1 - FAR) and True Accept Rate (TAR = 1 - FRR); True Accept Rate is synonymous with identification rate.

A variety of tests were carried out comparing different configurations of the identification system. Three datasets were used in this sequence of tests: FRGC controlled, FRGC uncontrolled, and FERET. Details of this testing can be found here.
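The FAR/FRR definitions above translate directly into code. The similarity scores here are hypothetical, but the computation follows the standard definitions: FAR is the fraction of impostor comparisons accepted, FRR the fraction of genuine comparisons rejected, at a given threshold:

```python
def error_rates(genuine_scores, impostor_scores, threshold):
    """FAR: fraction of impostor comparisons accepted (score >= threshold).
       FRR: fraction of genuine comparisons rejected (score < threshold)."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

# Hypothetical similarity scores from genuine and impostor comparisons
genuine  = [0.91, 0.85, 0.78, 0.95, 0.60]
impostor = [0.30, 0.55, 0.72, 0.10, 0.40]

for t in (0.5, 0.7, 0.9):
    far, frr = error_rates(genuine, impostor, t)
    print(f"threshold={t}: FAR={far} FRR={frr} TAR={1 - frr} TRR={1 - far}")
```

Sweeping the threshold shows the inverse relationship described above: as the threshold rises, FAR falls while FRR climbs.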

Copyright ©2006 - ANIMETRICS, INC - All Rights Reserved