The decision of whether and when to perform a benchmark has to weigh the
costs in travel, staff time, and consultant time against its purpose and the
potential value added. Benchmarking is a management decision that needs to
be made early. The less familiar you are with state-of-the-art AFIS, the more
appealing and important a benchmark becomes.
Benchmarks require test background files and a rigorous plan. It is very hard
to tell exactly what is going on inside an AFIS undergoing a benchmark. Such
factors as how many fingers are actually being searched, what threshold score
is being used, and which filters, if any, are in use are nearly impossible to
ascertain independently. You will find that you are at the mercy of the vendor
for answers to these and other questions. If you do decide to benchmark, it is
imperative that you have a sufficiently large database built from your own
users' data. If you are searching single fingers, the number of records needed
differs from that required for tenprint searches.
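One way to keep those hard-to-verify parameters from drifting is to write them down as an explicit plan record and require each vendor to disclose and freeze them before the test begins. The following is a minimal sketch in Python, not any vendor's API; every field name here is hypothetical.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class BenchmarkPlan:
        fingers_searched: int                 # how many fingers each search actually uses
        threshold_score: float                # score above which a candidate is reported
        filters: tuple = ()                   # e.g., pattern-type or sex filters, if any
        background_records: int = 100_000     # your own tenprint records, per the text

    # Example: a two-finger search with no filters against a 100,000-record
    # background file; the threshold value is invented for illustration.
    plan = BenchmarkPlan(fingers_searched=2, threshold_score=2500.0)
    print(plan)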
Once the number of tenprint records in a repository reaches approximately
400,000, the false match rate starts to climb. Rarely, however, will you have
the luxury of such a large benchmark database for each vendor, because it can
take almost a minute to extract the minutiae from one set of ten rolled
impressions. Extracting the data from 500,000 records would take about
9 months on one machine; nine dedicated machines working around the clock
could complete the task in 1 month. It is unreasonable to ask each vendor to
dedicate that much hardware for that long just to prepare for a benchmark. If
you cannot provide at least 100,000 of your own tenprint records for a
background file, then you should consider an alternative to letting each
vendor provide its own hand-tailored background data. The best alternative for
tenprint benchmarking would be on the order of 3,000 to 4,000 tenprint records
run against 3,000 to 4,000 different tenprint records taken from the
same people.
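To make the throughput arithmetic above concrete, here is a back-of-the-envelope sketch in Python. The per-record extraction time is an assumption: "almost a minute" is taken here as 47 seconds, which reproduces the 9-month and 1-month figures for machines running around the clock.

    RECORDS = 500_000
    SECONDS_PER_RECORD = 47              # assumed: "almost a minute" per set of ten rolled impressions
    SECONDS_PER_MONTH = 30 * 24 * 3600   # a machine working around the clock

    def months_to_extract(records: int, machines: int) -> float:
        """Months of round-the-clock work to extract minutiae from `records`."""
        return records * SECONDS_PER_RECORD / machines / SECONDS_PER_MONTH

    print(f"1 machine:  {months_to_extract(RECORDS, 1):.1f} months")   # about 9
    print(f"9 machines: {months_to_extract(RECORDS, 9):.1f} months")   # about 1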
At some point the error rate for binning by pattern type starts to be eclipsed
by the false match rate. The exact point is different for each system and each
database, but no benchmark is likely to be large enough to reach that point—
yet your system is likely to cross that threshold on the first day of operations.
Unfortunately, benchmarks tend to mask this and other issues.
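The crossover described above can be illustrated with a toy model; every rate here is invented for illustration and is not a vendor figure. If each non-mate comparison carries a small fixed false match probability p, the chance of at least one false match when searching N records is 1 - (1 - p)^N, which grows with N while the binning error rate stays flat.

    import math

    P_FALSE_MATCH = 1e-7   # assumed per-comparison false match probability (illustrative)
    BINNING_ERROR = 0.03   # assumed pattern-type misclassification rate (illustrative)

    def false_match_prob(n_records: int) -> float:
        """Probability of at least one false match when searching n_records."""
        return 1.0 - (1.0 - P_FALSE_MATCH) ** n_records

    for n in (4_000, 100_000, 400_000, 1_000_000):
        print(f"N = {n:>9,}: false-match probability = {false_match_prob(n):.4f}")

    # Database size at which the false match rate eclipses the binning error:
    crossover = math.log(1.0 - BINNING_ERROR) / math.log(1.0 - P_FALSE_MATCH)
    print(f"crossover near N = {crossover:,.0f} records")

Under these invented numbers, a 4,000-record benchmark sits far below the crossover while a million-record operational file sits well above it, which is exactly the masking effect just described.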
There are three ways to use the results of a benchmark:
1. Pre-filter the list of bidders.
2. Evaluate the proposals.
3. Verify the apparent “winner's” proposal claims.
A benchmark should be based on the anticipated size and functionality of your
system to the extent possible. As noted previously, size is often a major
stumbling block.