Saturday, May 22, 2004


Degrees of Certainty

I don't recall whether my first news of this came from a network news source or a major newspaper, but I do recall my first impression: that it was the Spanish authorities who had identified a fingerprint, a possible link to the Madrid bombings, as having come from a U.S. citizen, Brandon Mayfield. I was puzzled when, this past Thursday, Mayfield was released from custody after Spanish officials said fingerprints found on a plastic bag near the bombing site in Spain were not Mayfield's, but those of an Algerian suspect.

Now, examining early reports, I find that it was the FBI that had identified the latent print as having come from Mayfield:
Spanish police have been intrigued by the possibility of a U.S. connection to the Madrid bombings since FBI agents informed them more than three weeks ago that a fingerprint found on a plastic bag of detonators left by the bombers appeared to match Mayfield's.
This and other reports at the time described the fingerprint match (obtained as a result of an FBI national fingerprint database search) as an "absolutely incontrovertible" or "bingo" match. The fact that Mayfield was a lawyer (not exactly our most respected profession), a convert to Islam, and had military experience didn't help the case that this "match" could have occurred by chance. However, even at this early stage, Spanish authorities had expressed reservations about the FBI identification:
But senior Spanish law enforcement officials said their forensics experts remain unconvinced. Fingerprint identification depends on matching a certain number of criteria between prints as well as on the interpretation of data by experts, investigators said.

'The experts here in Spain ... still have doubts about the fingerprint,' the senior Spanish official said.
How could that be? The FBI claiming a definitive identification while Spanish authorities initially questioned the match and later identified another man as the source of the evidence. How do we reconcile the two? It would have to do with the quality of the evidence and the examination of that evidence.

Initial reports described the evidence as a "perfectly formed" fingerprint. This poses a problem. If we begin with the premise that no two persons (including identical twins) have identical prints [I submit that sufficient evidence exists to demonstrate this is a valid assumption, but for the sake of this argument, whether or not you agree, take it as a given], how can two separate individuals "match" the same fingerprint evidence? I, for one, was relieved when the latent print was later described as a partial print. This meant the electronic database and latent print examiners were working with limited information with which to attempt an identification, and it also explained this:
The report said Spanish forensics experts found only eight points of similarity between the print and the one of Mayfield held in U.S. files because he is a former Army officer.

The FBI said it found 15 such points, El Pais said.
Aha! Eight points for the Spaniards, 15 points for the Americans. I guess that would make the Americans about twice as confident in the identification, yes? How many 'points' are necessary to declare a match/identification? Therein lies the problem: standardization (or the lack thereof).

Some European countries followed a 16-point standard that has recently, for the most part, been abandoned (perhaps with good reason, but we'll leave that discussion for another time), and the U.S. hasn't employed any quantitative standard since the seventies. So the analysis is somewhat subjective.

The database is searched, and the computer algorithm comes up with and scores/ranks potential 'matches'. An examiner then makes a side-by-side comparison of the latent and database print, scrutinizing the points of similarity until a conclusion is reached as to whether the comparison represents an exclusion (the two prints came from separate sources), an identification (the similarities identified have reached a threshold at which the examiner is convinced the two prints came from the same source), or an inconclusive result (there is insufficient detail represented in the latent to either definitively exclude or include the known print as the source of the latent).
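The two-stage process just described can be sketched in code. To be clear, this is a toy illustration, not any agency's actual system: the function names, the minutiae-as-sets representation, and the 12-point threshold are all my own hypothetical simplifications.

```python
# Hypothetical sketch of the workflow described above: an automated search
# ranks candidate prints, then a notional "examiner" step classifies the
# side-by-side comparison. All names and thresholds are illustrative.

def rank_candidates(latent_features, database):
    """Score each database print against the latent and rank by score.

    `database` maps a subject ID to that subject's set of minutiae
    features; the score here is simply the count of shared features."""
    scores = {subject: len(latent_features & features)
              for subject, features in database.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def examiner_conclusion(points_in_agreement, points_in_conflict,
                        identification_threshold=12):
    """Classify a comparison as the text describes.

    Any genuine point of conflict excludes; enough agreement identifies;
    otherwise the result is inconclusive."""
    if points_in_conflict > 0:
        return "exclusion"
    if points_in_agreement >= identification_threshold:
        return "identification"
    return "inconclusive"
```

Note that under a purely quantitative rule like this, the Spanish count of 8 points would fall below threshold (inconclusive) while the FBI's 15 would clear it, which is exactly how two labs examining the same evidence can reach different conclusions.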

The number of points necessary to make an identification is dependent on the confidence of the examiner, which is in turn dependent on their training and experience and on the qualitative nature of the prints as well as their points of comparison. So, let's give some perspective to the 8 and 15 points of similarity identified by comparisons made in this case (I haven't seen any source documenting the number of points associated with the latest identification, made to the Algerian man by Spanish officials). For that, I will turn to what is a common practice by many labs in the verification of a DNA database match.

Offender DNA databases consist of DNA profiles obtained from individuals who have been convicted of various crimes (the qualifying crimes vary from state to state depending on statute). In some states the practice is to simultaneously collect a thumbprint and DNA sample from these offenders. Later, if a match is obtained between a crime scene sample and an offender DNA sample, the match is often 'verified' both by re-analysis of the offender DNA sample and by a search of the fingerprint database with the print originally collected from the offender.

So, the goal is to verify identity by searching the fingerprint database, comparing a thumbprint taken from the individual at one point in time (the time of DNA collection) with one taken from the individual at another point in time (the time of their arrest). Such a search commonly yields more than 50 points of similarity between these prints (although that number can be reduced if one or both of the prints is of poor quality). Two points can be taken away from this exercise. First, a match consisting of either 8 or 15 points is clearly one based on comparison with a latent that is either a partial print, of poor quality, or both. Second, the quality of the fingerprints stored in the database is also a significant factor in the quality/strength of an identification.
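The two-pronged verification practice above can also be sketched as pseudocode-style logic. Again, this is a hypothetical illustration of the practice as described, not any lab's actual procedure; the function name and the 50-point figure (taken from the rough number quoted above for two good-quality prints of the same finger) are assumptions for illustration only.

```python
# Hypothetical sketch of DNA-database-hit verification as described above:
# a hit is checked both by re-typing the offender's DNA sample and by a
# fingerprint search using the thumbprint collected alongside it.

def verify_database_hit(dna_retype_matches: bool, print_points: int,
                        good_quality_floor: int = 50) -> str:
    """Combine the two independent checks into an overall verdict.

    `good_quality_floor` is the rough point count expected when two
    good-quality prints of the same finger are compared."""
    if not dna_retype_matches:
        return "not verified: DNA re-analysis failed to match"
    if print_points >= good_quality_floor:
        return "verified: DNA and fingerprint both confirm identity"
    # Fewer points may simply mean one print was partial or of poor
    # quality, not that the identification is wrong.
    return "verified by DNA; fingerprint comparison weak, review print quality"
```

Seen against a same-source baseline of 50-plus points, the 8 or 15 points at issue in the Mayfield comparison look like what they were: a judgment call on limited data.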

So, what do we make of this current identification conundrum? Let's examine what we know. I guess, technically, we don't know (unsurprisingly), based on the reporting, that the Spanish and American identifications were made to the same print. Given that Mayfield was released from custody following this information, that is likely the case. However, that presupposes his timely release is necessarily related to this new information.

Let's take it to be true that the latent print that was previously identified as Mayfield's by the FBI, a match of which Spanish officials were skeptical, has now been identified as having come from an Algerian man. Presumably, since the Spaniards thought 8 points of similarity were previously insufficient for identification of this latent print, this new comparison has yielded similarities greater in quantity and/or quality (as there is no absolute 'quantitative' threshold applied). Furthermore, we don't hear the FBI screaming "foul," and we can be fairly certain that they examined the new match before releasing Mayfield. But, but you [FBI] said it was his! Indeed. What happened?

A collective "shit!" was no doubt heard from latent examiners everywhere who are wondering the same thing. We come back to the point of subjective analysis and standards. It would appear that the FBI examiners (I say examiners because generally, a second examiner must concur with the identification before it is reported; I'm assuming that happened) were in error in their 15-point identification. So, did those 15 points reflect good-quality, solid data and therefore indicate that the use of a minimal-point standard should be re-examined, with the threshold clearly set above 15 points?

Or were these examiners overreaching, overinterpreting the data? If so, was it the result of inexperience, the difficulty of this examination exceeding their expertise? One would expect, however, in a case of this magnitude, that more experienced examiners would have been assigned. Or was any overinterpretation due to bias? Perhaps the ranking given by the automated fingerprint identification system wasn't all that impressive, but when they (or others) discovered information regarding the potential match, particularly his conversion to Islam, it was deemed beyond 'coincidental'. Perhaps, at that point, the putative match was 're-examined' to effect an identification. This is clearly a more serious problem because it cannot be addressed by merely adding a quantitative threshold standard for identification.

To most scientists, the standards currently in use by latent examiners and those proposed by the Scientific Working Group on Friction Ridge Analysis, Study and Technology are so vague as to be almost nonexistent. Yet while improvements in these standards are necessary, most scientists, and certainly those with forensic experience, will allow that qualitative compensation can sometimes mitigate quantitative deficiency.

This case raises old questions but one thing is evident ... as these databases grow larger and the information is shared among more disparate groups, standardization of methodologies is a necessity. Adoption of some minimal, but absolute, criteria for identification is inevitable. As far as this particular case? Well, that's ...developing.
