statement analysis - The Skeptic's Dictionary
Statement analysis (also known as content analysis) examines the content of statements to determine whether the information a suspect provides is truthful or intentionally incomplete. Supposedly, there are linguistic cues that reveal concealed, missing, or false information. Those who defend this technique of interrogation believe they have a reliable method of detecting deception that goes beyond the obvious approach of making logical inferences from what is stated and identifying implausible claims based on general or specific knowledge. Anyone can detect a lie, for example, when a suspect is caught in a contradiction or makes statements that are inconsistent with one another. To someone with substantial background knowledge relevant to the issue being investigated, it is often obvious when a person making a statement has omitted important information. (On the other hand, one wonders why investigators in Satanic ritual abuse cases didn't balk at claims involving giraffes in the classroom or flying airplanes from Los Angeles to the desert to have sex with elephants, etc.) In any case, those like Avinoam Sapir, who developed what he calls Scientific Content Analysis (SCAN), think they've discovered something that goes beyond mere logical and common-sense analysis of people's statements.
Sapir claims to know that John Ramsey was "an abuser and knows who killed his daughter [Jon Benet]." He analyzed the CNN interview the Ramseys gave about a week after the murder of their daughter.* He claims to know this from Mr. Ramsey's choice of words. Sapir also claims that Magic Johnson got infected with HIV in a bisexual encounter. He knows this because Johnson never said he wasn't bisexual, only that he wasn't a homosexual, and because he said he was certain he got HIV from a woman. According to Sapir, using the word 'certain' indicates "a lack of certainty."* I wonder how certain Sapir is of that claim.
According to the website of the Laboratory for Scientific Interrogation (LSI), which markets Scientific Content Analysis (SCAN):
SCAN by LSI, developed by Avinoam Sapir, is the most effective technique available for obtaining information and detecting deception from statements of witnesses or suspects. SCAN (sometimes known as Statement Analysis) is an essential tool for law enforcement personnel, investigators, social service personnel, and anyone else who needs to obtain information from written material.
Sapir, who has done some work for the Jerusalem Police Department in Israel as a polygraph examiner and may even have been a member of the Mossad, has bachelor's degrees in psychology and criminology, but he picked up his linguistics on the job and by doing his own research.
It seems that the market for SCAN is the same as the market for the Quadro Tracker, the DKL LifeGuard, the TKS-2000, the ADE 651, and the polygraph: law enforcement, including the FBI (see p.7, "Are You Telling Me the Truth?").
LSI claims that a linguistic analysis of a written statement by a suspect will
solve every case for you quickly and easily. You only need the subject's own words, given of his/her own free will.
SCAN will show you whether the subject is truthful or deceptive, what information the subject is concealing, and whether or not the subject was involved in the crime.*
LSI boasts that "while others are out searching for physical evidence, you have already solved the case--using only the subject's own words." And with SCAN you won't have to spend hours doing "stressful interviewing," doing reverse speech analysis, or taking tedious courses in neuro-linguistic programming on "how to read any size body language." Furthermore, anyone can learn the technique in 26 hours for only $600.
The SCAN technique is now being used by the FBI and other federal agencies; by law enforcement agencies and military agencies throughout the U.S., Canada and Australia; by bank and insurance investigators; and by private industry.... SCAN is also being used in many other countries, including the U.K., Belgium, the Netherlands, Israel, Mexico, and South Africa.*
How does SCAN work? You begin by having the subject write a statement such as the following:
On February 22, 1989, a bundle of 10's [sic] totaling 5,000.00 dollars was found in locker #3, where my cash drawer is kept. The date stamped on the straps of the bundle is that of the 31st of January 1989, on this day as on most Tuesday [sic] I am responsible for balancing the vault. At approximately 2:00 p.m. I balanced the vault. The currency is then placed in vault locker #5. If #5 is locked then the currency is placed in any open locker and locked, if I am doing the vault then I will put it in locker #3. I did not have a chance to find someone to tell them before they went to the vault. If I placed the bundle in locker #3 then it was there from the 31'st [sic] of January until it was discovered on the 22'nd [sic] of February. I had no knowledge of the missing money. I've been with this bank for more than two years and if in that time you are unaware of my trustworthiness then I suggest we need to come to some sort of agreement so this does not happen again.
You then solve the case by applying special scientific linguistic techniques to the statement. For example, you will learn that
People who work in banks work with "currency", "bundles", etc. They do not work with "money". People cannot spend "currency" or "bundles". They can only spend "money". When the teller referred to the "missing money", she incriminated herself.
It's really that simple. An untrained investigator might think that more evidence would be needed before going to trial. In fact, SCAN makes trials unnecessary. Guilt or innocence is so much easier to discover by analyzing words than by the old-fashioned method of having to prove guilt beyond a reasonable doubt. Actually, SCAN is not permitted in court. Its recommended use is "to narrow the focus of interviews and investigations to save time and money and even lives." That last bit of information came via e-mail from someone identifying himself as a "noted deception expert." His other credentials are impressive (20 years as professor of Criminal Justice, Past Dean of an Online College of Criminal Justice, 10 years as a research instructor, 5 years as a researcher for the Dept. of Corrections). He may be right about saving time and money, but if the method isn't foolproof it could cost lives, not save them. On the strength of a misinterpretation of some bumbling statement by the person being interrogated, the wrong person could be detained while the real criminal makes his escape. Every good detective knows that it is a major mistake to identify a suspect as guilty too early in the investigation. SCAN doesn't seem to be any more useful or accurate a tool for an investigator than the polygraph.
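To see how thin the "analysis" in the teller example really is, here is a minimal Python sketch of that kind of word-choice check. The vocabulary lists and the flagging rule are my own illustrative assumptions, not LSI's actual (and proprietary) SCAN criteria.

    # A toy illustration of the word-choice "analysis" described above.
    # The vocabulary lists below are illustrative assumptions, not SCAN's criteria.
    import re

    INSIDER_TERMS = {"currency", "bundle", "bundles", "vault"}   # how a teller supposedly talks at work
    OUTSIDER_TERMS = {"money"}                                   # how a spender supposedly talks

    def flag_word_choices(statement: str) -> dict:
        """Count 'insider' vs. 'outsider' vocabulary in a written statement."""
        words = re.findall(r"[a-z']+", statement.lower())
        return {
            "insider_terms": [w for w in words if w in INSIDER_TERMS],
            "outsider_terms": [w for w in words if w in OUTSIDER_TERMS],
        }

    print(flag_word_choices("The currency is then placed in vault locker #5. "
                            "I had no knowledge of the missing money."))
    # {'insider_terms': ['currency', 'vault'], 'outsider_terms': ['money']}

Even this crude script "finds" exactly what the SCAN example finds, which tells you how little the method adds to simply noticing which words a writer happens to use.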
It doesn't do much to instill faith in law enforcement when we see law enforcement officers taking classes from people they should be investigating. In their defense, law enforcers claim that things like SCAN, the polygraph, profiling, and the voice stress analyzer "work." These tools help them catch the bad guys because some of the bad guys are ignorant and think such devices can really detect lies with some provable degree of validity. Some of the ignorant are intimidated into confessing. They "work" in the same sense that torture or extortion work: they get the result you want some of the time.
in the name of science
There is very little scientific research into statement analysis. One of the more vocal proponents of it is FBI special agent Susan H. Adams. Adams believes that statement analysis will allow an investigator to
gain insight into a suspect prior to conducting an interview. By learning more about a suspect and determining whether that person is being deceptive, they have a much better chance of identifying the guilty party and gaining a confession.
The purpose of statement analysis, according to Adams, is to gain a confession. The method's first assumption is that there is a distinct difference between truthful and fabricated statements that can be identified independently of matching the statements to the facts. She also believes "that truthful people give statements using the pronoun 'I,' which is first person, singular. Any deviation from this norm deserves close scrutiny, for it could be an indication that the person is not totally committed to the facts in the statement and, therefore, is not telling the whole truth." This last claim is based not on any scientific analysis but on what "investigators have noted." She makes many other claims about pronouns, shifting from one pronoun to another, nouns, being concise as opposed to offering too much information, and many other things. None of these claims is backed up by relevant research or studies. Some of the claims are just common sense, e.g., when a person who, by good evidence, was present when certain events occurred keeps repeating "I don't remember," he's probably lying.
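Adams's pronoun claim amounts to counting words. A minimal sketch of that check follows; the pronoun sets are mine, and nothing in the method says what share of 'I' counts as a suspicious "deviation."

    # A minimal sketch of the pronoun check Adams describes: how much of the
    # statement's pronoun use is first-person singular? The pronoun sets and any
    # threshold for "deviation" are assumptions for illustration only.
    import re
    from collections import Counter

    FIRST_PERSON_SINGULAR = {"i", "me", "my", "mine", "myself"}
    OTHER_PRONOUNS = {"we", "us", "our", "you", "your", "he", "she", "they", "them"}

    def pronoun_profile(statement: str) -> dict:
        counts = Counter(re.findall(r"[a-z']+", statement.lower()))
        first = sum(counts[w] for w in FIRST_PERSON_SINGULAR)
        other = sum(counts[w] for w in OTHER_PRONOUNS)
        total = first + other
        return {
            "first_person_singular": first,
            "other_pronouns": other,
            "first_person_share": first / total if total else None,
        }

    print(pronoun_profile("I balanced the vault. We then locked the drawer."))
    # {'first_person_singular': 1, 'other_pronouns': 1, 'first_person_share': 0.5}

The arithmetic is trivial; the leap from a low share of 'I' to "not telling the whole truth" is supplied entirely by the analyst.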
Adams's doctoral dissertation (2002), which she called an "exploratory study," examined "sixty narratives written by suspects and victims identified through the investigation of criminal incidents." The truth or deceptiveness of the narratives was known before the examination of the statements. She looked for patterns of "linguistic and structural attributes" in statements known to be true or deceptive. Not being blinded, she could have unconsciously evaluated the different statements according to preconceived notions. In any case, she found that the "most significant predictor of veracity was unique sensory details; the most significant predictor of deception was relative length of the prologue partition." The strongest conclusion she was able to make was
The analysis of the examined narratives written by suspects and victims suggests that linguistic and structural features of written narratives are predictive of the likelihood of veracity and deception. These results lend support to the Undeutsch Hypothesis (1989) that truthful narratives differ from fabricated narratives in structure and content. (italics added)
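For what it's worth, the dissertation's headline "predictor of deception" is a simple proportion: how much of the narrative the writer spends before getting to the incident itself. Here is a sketch, assuming the analyst has already marked where the prologue ends (there is no agreed automatic way to do that); the example narrative and boundary are invented for illustration.

    # A sketch of the "relative length of the prologue partition" feature from
    # Adams's dissertation: the share of the narrative spent before the writer
    # gets to the incident itself. The example boundary below is an assumption.

    def relative_prologue_length(narrative: str, prologue: str) -> float:
        """Proportion of the narrative's words that belong to the prologue."""
        total_words = len(narrative.split())
        prologue_words = len(prologue.split())
        return prologue_words / total_words if total_words else 0.0

    narrative = ("I got to the bank at eight and set up my drawer as usual. "
                 "At two I balanced the vault and placed the currency in locker #3.")
    prologue = "I got to the bank at eight and set up my drawer as usual."
    print(round(relative_prologue_length(narrative, prologue), 2))  # 0.52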
A Google search of the "Undeutsch Hypothesis" brings up only about 8k hits. There's nothing in Wikipedia about this hypothesis. The number one Google hit is an article in The Journal of Credibility Assessment and Witness Psychology, which is published by the psychology department of Boise State University.
The evaluation of individual statements of crime witnesses is a task that has been performed by German psychologists for legal courts in the last 40 years in estimated tens of thousands of cases. The methodology employed relies extensively on Criteria-Based Content Analysis (CBCA; Raskin & Esplin, 1991; Steller & Koehnken, 1989; Undeutsch, 1967, 1982) which was developed in Germany and Sweden from the 1950s. According to the "Undeutsch hypothesis", subjectively truthful experience-based reports will differ in a wide range of aspects, the "reality criteria", from fabricated accounts of events that were not actually experienced. This intuition has been recently validated in a number of field studies and experimental simulations in which marked differences were found between truthful reports and fictitious stories on most of the 19 criteria currently in use (e.g., Horowitz, Lamb, Esplin, Boychuk, Krispin & Reiter-Lavery, 1997; Lamb, Sternberg, & Esplin, 1994). (See the article for the references.)
There may be 19 criteria in use to distinguish truthful from fictitious stories told by people under investigation, but there is no theoretical basis for the differences that appear in such accounts. Psychologist Aldert Vrij says of SCAN in his book Detecting Lies and Deceit:
The underlying assumption of SCAN is that a statement derived from memory of an actual experience differs in content and quality from a statement based on invention or fantasy. It is therefore thought that some SCAN criteria are more likely to occur in truthful statements than in deceptive statements, whereas other criteria are more likely to occur in deceptive statements than in truthful statements. Although SCAN predicts differences between truth tellers and liars, no theoretical rationale is provided as to why these differences would occur. (p. 282)
Furthermore, there is no systematic way to eliminate differences in deceptive and truthful statements that are due to deficient memory or suggestion. Until the Undeutsch Hypothesis becomes the Undeutsch Law, I'll remain skeptical. Until there are large-scale double-blinded studies that validate the claims made for the power, accuracy, and utility of statement analysis, law enforcement should approach this technique with caution.
See also apophenia, cold reading, personology, polygraph, profiling, self-deception, and wishful thinking.
further reading
Vrij, Aldert. 2008. Detecting Lies and Deceit: Pitfalls and Opportunities. 2nd ed. Wiley.
"Statement Analysis Scan or Scam?," by Robert A. Shearer, Skeptical Inquirer, May/June 1999
Communication Under Stress: Indicators Of Veracity And Deception In Written Narratives by Susan H. Adams (doctoral dissertation, 2002)
Statement Analysis: What Do Suspects' Words Really Reveal? by Susan H. Adams, Special Agent Adams taught statement analysis as part of interviewing and interrogation courses at the FBI Academy.
Statement analysis field examination technique: a useful investigative tool, from The FBI Law Enforcement Bulletin, April 2003, by Gene Klopf and Andrew Tooke Last updated 26-Oct-2015