The importance of math in the administration of justice has risen with the growth of identification forensics, and its influence continues to permeate questions of proof and judgment. For example, statistics (evidence) and probability (analytics)^{1} have been used and challenged in many criminal cases to match people to events through such means as DNA, soil samples, eyewitness descriptions, firearm purchase records, typewritten documents, clothes fibers, footprints, hair follicles, blood types, sperm, teeth marks, and conviction rates.^{2} Indeed, everything from traffic tickets to predictive policing draws on math in some way.^{3}

Nonetheless, legal reasoning has on occasion too quickly appropriated mathematical logic, resulting in arguments built on fallacies and confusion.^{4} The subject has been debated periodically throughout the history of law, and it tests the integrity of legal decision-making in the many areas that rely on numerical interpretations of human policies and actions.^{5}

David McCord, Professor of Law, Drake University Law School, observed that the legal literature on this subject tends to categorize precedent along legal and not mathematical lines, further impeding a consistent approach to this type of proof or analysis.^{6} In order to clarify these forensic applications, Prof. McCord outlined five distinct areas of mathematical evidence: (1) “empirical statistics”;^{7} (2) “probabilities of a random match”;^{8} (3) “nonempirical probabilities of guilt incorporating empirical statistics without Bayes’ Theorem”;^{9} (4) “non-empirical probabilities of guilt developed without empirical statistics and without Bayes’ Theorem”;^{10} and (5) “non-empirical probabilities of guilt incorporating empirical statistics via Bayes’ Theorem.”^{11}

More recently, the authors of a new book exploring the promise and pitfalls of mathematics in the courtroom^{12} expertly described the current state of affairs for the New York Times:

“Decades ago, the Harvard law professor Laurence H. Tribe wrote a stinging denunciation of the use of mathematics at trial, saying that the ‘overbearing impressiveness’ of numbers tends to ‘dwarf’ other evidence. But we neither can nor should throw math out of the courtroom. Advances in forensics, which rely on data analysis for everything from gunpowder to DNA, mean that quantitative methods will play an ever more important role in judicial deliberations. The challenge is to make sure that the math behind the legal reasoning is fundamentally sound. Good math can help reveal the truth. But in inexperienced hands, math can become a weapon that impedes justice and destroys innocent lives.”^{13}

So it is that the relationship of law and math is a mile wide and a mile deep, as is the bibliography of these disciplines. This article, then, is only an attempt to collect recent, historical, and otherwise notable materials regarding the role and risks of mathematics in criminal cases.

**CASES**

Commonwealth v. Ferreira, 460 Mass. 781 (Mass. Sup. Jud. Ct. 2011)

“We conclude that the prosecutor’s closing argument error created a substantial risk of a miscarriage of justice because of the danger that the jury gave undue weight to a mathematical probability analysis that supposedly demonstrated that the lone eyewitness identification on which the prosecutor’s case wholly rested constituted proof beyond a reasonable doubt, the victim’s admitted uncertainty as to the accuracy of the identification, and our recognition that “[e]yewitness identification of a person whom the witness had never seen before the crime or other incident presents a substantial risk of misidentification and increases the chance of a conviction of an innocent defendant.” Commonwealth v. Silva-Santiago, 453 Mass. 782, 796 (2009), quoting Commonwealth v. Jones, 423 Mass. 99, 109 (1996). Our conclusion is strengthened by the evidence of the defendant’s innocence: Pacheco’s testimony that Dias, not the defendant, committed the robbery with him; Dias’s testimony that he committed the robbery with Pacheco, and Bennett’s testimony that the defendant was baby-sitting her children on the evening of the robbery. While the jury apparently did not credit this evidence, it cannot be ignored in evaluating whether there was a substantial risk of a miscarriage of justice.”

Dorsey v. State, 350 A. 2d 665 (Md. Ct. App. 1976)

“The principal issue in the appellant’s trial was whether he was one of the perpetrators of the robbery. Detective Simmons’ testimony, attempting to establish that a large percentage of those arrested by him for robbery were ultimately proven guilty, undertook to collaterally establish the detective’s investigative successes, but had no probative value in tending to establish the proposition in issue — the identity of the appellant as one of the robbers — and was thus patently irrelevant. . . .

Permitting the detective to relate syllogistically — though imperfectly — before the jury, the high probability of the appellant’s guilt, tended to portray the officer as a ‘super-investigator’ and thus clothed his testimony, with a greater weight than that which might have been given to the testimony of the other witnesses. Thus, the jury’s basic function of weighing the conflicting evidence in arriving at a conclusion of guilt ‘beyond a reasonable doubt,’ was subjected to the counterbalancing effect of the detective’s irrelevant and extraneous opinion. Indeed in the absence of any showing of similarity between the investigation which led to the appellant’s arrest and those other investigations which led to the detective’s conviction rate, the premise posited before the jury appears to have been invalid. . . .

We conclude, as did the Court of Special Appeals, that the collateral evidence elicited from Detective Simmons, concerning his arrest-conviction record, was irrelevant and extraneous to the issue of the appellant’s guilt or innocence, and that the trial court’s ruling, permitting it, was manifestly erroneous.”

Meredith v. Com., 959 SW 2d 87 (Ky. Sup. Ct. 1997)

“Appellant claims error with the Commonwealth’s reference during closing argument to statistical probabilities concerning the DNA match between appellant and the saliva extracted from the cigarettes found in Larsen’s apartment. . . .

The Commonwealth had, at best, a weak circumstantial case. There was no eyewitness to the murder and no physical evidence placing Larson inside appellant’s truck. The single cigarette butt that allegedly had appellant’s DNA on it did nothing more than place appellant in Larson’s Danville apartment at some unknown time. It did not limit appellant’s presence to the weekend before her disappearance. Moreover, the charged offenses did not occur in Larson’s apartment. This was not harmless error and we reverse on this issue.”

Miller v. State, 399 SW 2d 268 (Ark. Sup. Ct. 1966)

“Dr. Matthews had made no tests on which he could reasonably base his probabilities of one in ten on soil color, one in one hundred on soil texture, or one in one thousand on soil density (which he multiplied together to obtain his one-in-one-million figure), nor did he base his testimony on studies of such tests made by others. He admitted that his figures were predicated on ‘estimates’ and ‘assumptions’. In short, there is no foundation upon which to base his probabilities of one in a million.

‘An expert witness’ view as to probabilities is often helpful in the determination of questions involving matters of science or technical or skilled knowledge. * * * It is necessary, however, that the facts upon which the expert bases his opinion or conclusion permit reasonably accurate conclusions as distinguished from mere guess or conjecture. * * * To admit expert testimony deduced from a scientific principle or discovery, the thing from which the deduction is to be made must be sufficiently established to have gained general acceptance in the particular field in which it belongs.’ 20 Am.Jur., Evidence, § 795, p. 668.

Admission of the unsubstantiated, speculative testimony on probabilities was clearly erroneous. See Little v. George Feed & Supply Co., 233 Ark. 78, 342 S.W.2d 668; 2 Wharton’s Criminal Evidence, § 505, p. 328 (12th ed. 1955).”
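The one-in-a-million figure rejected in Miller was produced by simple multiplication of the three per-characteristic estimates, a step that is valid only if the characteristics are statistically independent. A sketch of that arithmetic, using the figures recited in the opinion:

```python
# Per-characteristic match probabilities recited in the opinion;
# the court found these were unvalidated "estimates" and "assumptions."
p_color = 1 / 10       # soil color
p_texture = 1 / 100    # soil texture
p_density = 1 / 1000   # soil density

# The product rule applies only to statistically independent events --
# an assumption never tested here (the color, texture, and density of
# a soil sample may well be correlated).
p_joint = p_color * p_texture * p_density
print(p_joint)  # ~ 1e-06, the "one in a million" figure
```

Even if each estimate were sound, correlation among the characteristics would make the true joint probability far larger than the product; People v. Collins, below, turned on the same unexamined independence assumption.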

People v. Collins, 68 Cal. 2d 319 (Cal. Sup. Ct. 1968)

“We deal here with the novel question whether evidence of mathematical probability has been properly introduced and used by the prosecution in a criminal case. While we discern no inherent incompatibility between the disciplines of law and mathematics and intend no general disapproval or disparagement of the latter as an auxiliary in the fact-finding processes of the former, we cannot uphold the technique employed in the instant case. As we explain in detail, infra, the testimony as to mathematical probability infected the case with fatal error and distorted the jury’s traditional role of determining guilt or innocence according to long-settled rules. Mathematics, a veritable sorcerer in our computerized society, while assisting the trier of fact in the search for truth, must not cast a spell over him. We conclude that on the record before us defendant should not have had his guilt determined by the odds and that he is entitled to a new trial. We reverse the judgment.”

People v. Harbold, 464 NE 2d 734 (Ill. App. Ct. 1984)

“Our concern is more with the impact of the probability statistic upon the jury than the foundation for the testimony. The statistic was patently irrelevant. Probability theory is necessarily silent on the crucial question before the jury: Of the relatively small percentage of the population with consistent characteristics, which one, if any, committed the crime? (People v. Collins (1968), 68 Cal.2d 319, 330, 438 P.2d 33, 40, 66 Cal. Rptr. 497, 504.) Jurors would be hard pressed to explain how the 1-in-500 chance of an accidental match did not equate with a 1-in-500 chance that defendant was innocent. (See Tribe, Trial by Mathematics: Precision and Ritual in the Legal Process, 84 Harv. L. Rev. 1329, 1355 (1971).) Of course, the statistic means nothing of the sort. Absent a sound basis to limit the number of possible defendants, the defendant here is but one of thousands of people who share these same characteristics. Legion possibilities incapable of quantification, such as the potential for human error or fabrication, or the possibility of a frame-up, must be excluded from the probability calculation. . . .

We believe that testimony to statistical probabilities encouraged the jury to disregard evidential risks traditionally weighed in determining guilt or innocence, and focused unfairly upon a numerical conclusion. (See People v. Collins (1968), 68 Cal.2d 319, 330-31, 438 P.2d 33, 40-41, 66 Cal. Rptr. 497, 504-05.) As such, we find that the testimony violated one of the primary requirements of expert opinion, that the opinion be an aid to the jury. (See People v. Stapelton (1972), 4 Ill. App.3d 477, 480-81, 281 N.E.2d 76.) In light of the closeness of this circumstantial case, we cannot say that this improper testimony, which gave a false impression of precision in the measurement of guilt, did not affect the jury’s deliberations.”
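The confusion the Harbold court describes, equating the chance of an accidental match with the chance of innocence, is the transposed conditional sometimes called the prosecutor's fallacy. A numerical sketch (the 1-in-500 figure is from the opinion; the population size is hypothetical):

```python
# P(match | innocent person) -- the figure discussed in the opinion.
p_match_given_innocent = 1 / 500

# Hypothetical pool of alternative possible sources; NOT from the opinion.
population = 100_000

# Expected number of innocent people who would match by coincidence:
expected_innocent_matches = (population - 1) * p_match_given_innocent

# With no other evidence, the chance that a given matching person is the
# true source is roughly 1 / (1 + expected innocent matches) -- nothing
# like the 499-in-500 "probability of guilt" the fallacy suggests.
p_source_given_match = 1 / (1 + expected_innocent_matches)
print(round(expected_innocent_matches), round(p_source_given_match, 4))
```

On these assumed numbers, about 200 innocent people would be expected to match, leaving the defendant as merely one of roughly 200 candidates; that is the court's point that the statistic is silent on who, among those with consistent characteristics, committed the crime.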

People v. Risley, 214 NY 75 (1915)

“Attention has been called to this evidence in view of the evidence offered by the People to show that it was highly improbable if not impossible that defects similar to those in the machine owned by defendant exist in any other typewriting machine. The witness, who was a professor of mathematics in one of the universities of the state, under repeated objections by counsel for defendant, was permitted to testify that by the application of the law of mathematical probabilities the chance of such defects being produced by another typewriting machine was so small as to be practically a negative quantity. The witness asserted that when the facts are ascertained, the application of the law of probabilities to them is a matter of pure mathematics and not of speculation or opinion. He defined the law of probabilities as ‘A proper fraction expressing the ratio of the number of ways an event may happen divided by the total number of ways in which it can happen.’ The various defects claimed to be visible and pointed out by the experts in the specimens of typewriting made upon the defendant’s machine corresponding to like defects in the letters of the two words ‘the same’ in the alleged altered document were called to the witness’ attention by the district attorney, and he was asked to apply the law of mathematical probability thereto. . . . The witness, so far as the record discloses, was not qualified as an expert in typewriting; he had never made a study of such work, or of the machine claimed to have been used for the purpose of interpolating words in the document offered in evidence, nor did he take into consideration in arriving at his result the effect of the human operation of such a machine, that is whether the same defects would always be discernible, no matter who was the individual operating the machine. 
These factors and many others which we cannot foresee and which, in all likelihood, are beyond the possibility of any human being to ascertain, would enter into any calculation of this character. The statement of the witness was not based upon actual observed data, but was simply speculative, and an attempt to make inferences deduced from a general theory in no way connected with the matter under consideration supply the usual method of proof.”

People v. Truscio, 251 AD 2d 966 (NY 4th Dept. 1998)

“The contention of defendant that she was deprived of a fair trial by the admission into evidence of testimony concerning a prior uncharged arson and evidence regarding the statistical probabilities in this case is not preserved for our review (see, CPL 470.05 [2]). In any event, defendant was not thereby deprived of a fair trial. Testimony about the prior uncharged arson was introduced to establish that the motive of defendant in administering insulin to the patients was her need to be thought of as heroic (see generally, People v Ventimiglia, 52 N.Y.2d 350). With respect to the statistical evidence, Supreme Court admonished the jury that the opinions expressed by the expert were based upon a number of assumptions and judgments that the jury had to accept before it could consider the expert’s mathematical calculations. In addition, the court instructed the jury that it must determine for itself whether defendant is guilty beyond a reasonable doubt. Under the circumstances, the expert testimony did not invade “‘the jury’s exclusive province of determining an ultimate fact issue in the case’” (People v Bajraktari, 154 AD2d 542, 543, lv denied 75 N.Y.2d 963, quoting People v Abreu, 114 AD2d 853, 854).”

State v. Carlson, 267 NW 2d 170 (Minn. Sup. Ct. 1978)

“Our concern over this evidence is not with the adequacy of its foundation, but rather with its potentially exaggerated impact on the trier of fact. Testimony expressing opinions or conclusions in terms of statistical probabilities can make the uncertain seem all but proven, and suggest, by quantification, satisfaction of the requirement that guilt be established ‘beyond a reasonable doubt.’ See, Tribe, Trial by Mathematics, 84 Harv.L.Rev. 1329. Diligent cross-examination may in some cases minimize statistical manipulation and confine the scope of probability testimony. We are not convinced, however, that such rebuttal would dispel the psychological impact of the suggestion of mathematical precision, and we share the concern for ‘the substantial unfairness to a defendant which may result from ill conceived techniques with which the trier of fact is not technically equipped to cope.’ People v. Collins, 68 Cal.2d 332, 66 Cal.Rptr. 505, 438 P.2d 41.

For these reasons, we believe Gaudette’s testimony that there was only a 1-in-800 chance that the foreign pubic hairs found on the victim did not come from the accused and an even more remote 1-in-4,500 chance that the head hairs did not belong to the accused was improperly received. Strauss’ testimony concerning microscopic hair comparison analysis, however, was properly admitted. In light of Strauss’ testimony, we regard Gaudette’s statement as cumulative and thus nonprejudicial on the facts of this case.”

State v. Hernandez, 531 NW 2d 348 (Wis. Ct. App. 1995)

“Hernandez has one further complaint about the expert. He claims that the expert ran afoul of Jensen when she was allowed to opine that only one percent of the children claiming sexual assault fabricate the account. We reject this argument because of the doctrine of invited response. See State v. Wolff, 171 Wis. 2d 161, 168, 491 N.W.2d 498, 501 (Ct. App. 1992). It was Hernandez himself who brought up the question during cross-examination of the expert, asking whether the expert was aware of fabricated stories concocted by children in the past. The expert answered, “I had a case where there was fabrication, Yes.” On redirect examination, the prosecutor asked the expert whether she was aware of any percentages about how many cases are fabricated. Upon objection, the prosecutor explained that the door had been opened. The trial court overruled the objection and allowed the expert to answer. The reason why this is an invited response is because Hernandez elicited an admission that children may fabricate. The prosecutor was entitled to rehabilitate the expert to explain that fabrication was not common. There was no error.[1]

[1] We point out that even if there was error, and we hold there was not, the error was harmless. First, the expert’s testimony was later ordered stricken from the record after Hernandez was able to get the expert to admit that she had no foundation for her one percent opinion. The trial court ordered that the jury not consider this testimony. Second, Hernandez put on his own expert witness during his case-in-chief who testified that ‘many’ children falsify sexual assaults. If there was error in originally admitting the testimony, it was harmless beyond any doubt. See State v. Dyess, 124 Wis.2d 525, 543, 370 N.W.2d 222, 231-32 (1985).”

State v. Sneed, 414 P.2d 858 (N.M. Sup. Ct. 1966)

“Here we are asked to approve the use of mathematical odds as evidence to identify the defendant as the one purchasing the gun, that is, that the odds, in themselves, are evidence of identification. . . . We hold that mathematical odds are not admissible as evidence to identify a defendant in a criminal proceeding so long as the odds are based on estimates, the validity of which have not been demonstrated. The cause is reversed with directions to award the defendant a new trial and at said trial to proceed in accordance with the opinions expressed herein. It is so ordered.”

United States v. Massey, 594 F. 2d 676 (8th Cir. 1979)

“At best, we conclude the court’s colloquy with the witness concerning mathematical probabilities was speculative and confusing. The gravamen of the error came during the trial judge’s comment construing the testimony in terms of the probability of making an erroneous identification through the comparison of hair samples. . . .

By using such misleading mathematical odds the prosecutor “confuse[d] the probability of concurrence of the identifying marks with the probability of mistaken identification” of the bank robber. McCormick on Evidence § 204, at 487 (E. Cleary ed. 1972). In other words, the prosecutor has infused in the minds of the jury the confusion in identifying the hair with identifying the perpetrator of the crime.”

**BOOKS**

Applying Statistics in the Courtroom: A New Approach for Attorneys and Expert Witnesses (Chapman and Hall/CRC 2001)

“This book is divided into four parts. The opening chapters concern the relationship between a sample and the population from which it is drawn. Chapter 1 describes the courts’ gradual acceptance of samples and sampling methodology. Chapter 2 defines the representative random sample. Chapter 3 compares various sampling methodologies. Chapter 4 is devoted to the use of descriptive statistics in the courtroom including measures of central tendency, precision, and percentage. Chapter 5 provides a brief introduction to probability. Chapters 6, 7, and 8 describe the varying acceptance of probability-based testimony in civil, criminal, and environmental hazard cases, respectively. Chapter 9 summarizes the courts’ responses to how large a sample must be. Chapter 10 addresses the same topic from the statistician’s point of view and describes some simple procedures for testing statistical hypotheses. Chapter 11 describes correlation and regression of two variables. Chapter 12 extends this discussion to multiple variables and shows how the courts have applied multiple regression methods in cases of alleged discrimination. Chapter 13 is devoted to preventive actions that can be taken to stay out of the courtroom and discusses the challenges that can be made to bad statistics once inside. Chapter 14 prepares the statistician for some of the twists and turns the trial process can take. Chapter 15 describes how an attorney can make the most effective use of statistics and statisticians at various points in the trial process and provides a set of questions on data-related concerns for use during discovery.”

Courtroom Use and Misuse of Mathematics, Physics and Finance: Cases, Lessons and Materials (Carolina Academic Press 2013)

“Numbers are every bit as persuasive and powerful as verbal arguments and presentations. In fact, they can be far more potent, insidious and misleading. Formerly Mathematics, Physics and Finance for the Legal Profession, this new edition catalogs a vast array of scams, schemes and deceptive courtroom presentations that confront lawyers regularly. Nearly every chapter of this book provides information that all lawyers must possess. We live in a scientific world, a digital world — one that is ruled by numbers, equations, formulas and statistics. The topics may seem complex, but the explanations are elementary and, at times, entertaining.”

Freakonomics (HarperCollins Publishers 2009)

“Steven D. Levitt is an economist. Stephen J. Dubner is a writer. They co-authored *Freakonomics*, a book about cheating teachers, bizarre baby names, self-dealing Realtors, and crack-selling mama’s boys. They figured it would sell about 80 copies. Instead, it has sold 4 million, in 35 languages. Then they wrote *SuperFreakonomics*, with stories about drunk walking, the economics of prostitution, and how to stop global warming. It hasn’t quite sold 4 million copies yet but it’s getting there. A lot of other stuff has happened, too. A blog. A radio show. A movie. Lectures. Even Jon Stewart — and *Beauty and the Geek*. This is the place where all that stuff continues to happen. Welcome to Freakonomics.com.”

Math on Trial: How Numbers Get Used and Abused in the Courtroom (Basic Books 2013)

“In the wrong hands, math can be deadly. Even the simplest numbers can become powerful forces when manipulated by politicians or the media, but in the case of the law, your liberty—and your life—can depend on the right calculation. In Math on Trial, mathematicians Leila Schneps and Coralie Colmez describe ten trials spanning from the nineteenth century to today, in which mathematical arguments were used—and disastrously misused—as evidence. They tell the stories of Sally Clark, who was accused of murdering her children by a doctor with a faulty sense of calculation; of nineteenth-century tycoon Hetty Green, whose dispute over her aunt’s will became a signal case in the forensic use of mathematics; and of the case of Amanda Knox, in which a judge’s misunderstanding of probability led him to discount critical evidence—which might have kept her in jail. Offering a fresh angle on cases from the nineteenth-century Dreyfus affair to the murder trial of Dutch nurse Lucia de Berk, Schneps and Colmez show how the improper application of mathematical concepts can mean the difference between walking free and life in prison. A colorful narrative of mathematical abuse, Math on Trial blends courtroom drama, history, and math to show that legal expertise isn’t always enough to prove a person innocent.”

Modern Introduction to Probability and Statistics: Understanding Why and How (Springer 2005)

“In this book you will find the basics of probability theory and statistics. In addition, there are several topics that go somewhat beyond the basics but that ought to be present in an introductory course: simulation, the Poisson process, the law of large numbers, and the central limit theorem. Computers have brought many changes in statistics. In particular, the bootstrap has earned its place. It provides the possibility to derive confidence intervals and perform tests of hypotheses where traditional (normal approximation or large sample) methods are inappropriate. It is a modern useful tool one should learn about, we believe.

Examples and datasets in this book are mostly from real-life situations, at least that is what we looked for in illustrations of the material. Anybody who has inspected datasets with the purpose of using them as elementary examples knows that this is hard: on the one hand, you do not want to boldly state assumptions that are clearly not satisfied; on the other hand, long explanations concerning side issues distract from the main points. We hope that we found a good middle way.”

Numbers Behind NUMB3RS: Solving Crime with Mathematics (Penguin Press 2007)

“The companion to the hit CBS crime series Numb3rs presents the fascinating way mathematics is used to fight real-life crime. Using the popular CBS prime-time TV crime series Numb3rs as a springboard, Keith Devlin (known to millions of NPR listeners as ‘the Math Guy’ on NPR’s Weekend Edition with Scott Simon) and Gary Lorden (the principal math advisor to Numb3rs) explain real-life mathematical techniques used by the FBI and other law enforcement agencies to catch and convict criminals. From forensics to counterterrorism, the Riemann hypothesis to image enhancement, solving murders to beating casinos, Devlin and Lorden present compelling cases that illustrate how advanced mathematics can be used in state-of-the-art criminal investigations.”

Principles and Practices for a Federal Statistical Agency (5th ed. 2013)

“Publicly available statistics from government agencies that are credible, relevant, accurate, and timely are essential for policy makers, individuals, households, businesses, academic institutions, and other organizations to make informed decisions. Even more, the effective operation of a democratic system of government depends on the unhindered flow of statistical information to its citizens. In the United States, federal statistical agencies in cabinet departments and independent agencies are the governmental units whose principal function is to compile, analyze, and disseminate information for such statistical purposes as describing population characteristics and trends, planning and monitoring programs, and conducting research and evaluation. The work of these agencies is coordinated by the U.S. Office of Management and Budget. Statistical agencies may acquire information not only from surveys or censuses of people and organizations, but also from such sources as government administrative records, private-sector datasets, and Internet sources that are judged of suitable quality and relevance for statistical use. They may conduct analyses, but they do not advocate policies or take partisan positions. Statistical purposes for which they provide information relate to descriptions of groups and exclude any interest in or identification of an individual person, institution, or economic unit. Four principles are fundamental for a federal statistical agency: relevance to policy issues, credibility among data users, trust among data providers, and independence from political and other undue external influence. Principles and Practices for a Federal Statistical Agency: Fifth Edition explains these four principles in detail.”

Reference Manual on Scientific Evidence (Fed. Jud. Ctr. 3rd ed. 2011)

“The Reference Manual on Scientific Evidence, Third Edition assists judges in managing cases involving complex scientific and technical evidence by describing the basic tenets of key scientific fields from which legal evidence is typically derived and by providing examples of cases in which that evidence has been used.” See, e.g., Chapters: Reference Guide on Statistics (pp. 211-302); Reference Guide on Multiple Regression (pp. 303-357); Reference Guide on Survey Research (pp. 359-423); Reference Guide on Estimation of Economic Damages (pp. 425-502).

**SCHOLARLY ARTICLES**

Admissibility of ‘Probability Evidence’ in Criminal Trials – Part I, 26 Jurimetrics J. 343 (1986)

“This article examines the confusion that can confound courts in understanding a certain kind of probability (called a P-value) that can be helpful in evaluating the significance of statistical evidence in criminal trials, particularly when it is used as a measure of the probative value of identification evidence – things like bloodstains, hair fibers, fingerprints and forged documents. This article confirms the initial skepticism about the appropriateness of probability calculations in the evaluation of this kind of nonquantitative evidence by examining the early instances of probability evidence in criminal trials, but concludes by arguing that the mathematics of probability has important applications to forensic proof and thus is becoming more common. Some courts oppose this probability evidence completely, while others uncritically admit it, thus exposing the need for a middle road.”

Admissibility of ‘Probability Evidence’ in Criminal Trials – Part II, 27 Jurimetrics J. 160 (1987)

“Part I of this article described some classic cases in which dubious testimony about probabilities helped convict defendants. At the same time, the author suggested that there are circumstances in which “probability evidence” should be admissible, and promised to chart a middle course between Procrustean exclusion of probability evidence and uncritical acceptance of numerobabble. This article discusses a series of related propositions about criminal cases that include “trace evidence” – bloodstains, semen, glass fragments, fibers, or other materials – linking a defendant to a crime. The author maintains that reasonable estimates of pertinent population proportions should be admissible, that argument on the part of counsel as to corresponding estimates of the probability of a coincidental misidentification should be permitted, and that neither expert opinions as to whether the defendant left the trace evidence nor displays of the posterior probability that defendant did so should be admissible.”

Amoral Numbers and Narcotics Sentencing, Val. U. L. Rev. (forthcoming 2013)

“Americans are fascinated with lists and rankings. Magazines catch the eye with covers promising ’92 Cute Summer Looks,’ college football fans anxiously await the release of pre-season rankings, and law schools have reshaped themselves in reaction to the rankings released by U.S. News and World Report. With each of these, though, the lists often do more to create a reality than to reflect one, with distinct negative effects. The same problem plagues federal narcotics sentencing, where rankings of the relative seriousness of crimes are embedded in sentencing guidelines and minimum sentences required by statutes, though they are rooted neither in empirical evidence nor a consistent theory of problem-solving.”

Bayesian Approach to Identification Evidence, 83 Harv. L. Rev. 489 (1970)

“State courts have met little success in analyzing whether prosecutors should be permitted to introduce mathematical statistics to show the weight that should be given to identification evidence. Taking one case as a convenient paradigm, the authors demonstrate that statistics will rarely conclusively identify a defendant. They suggest a mathematical method appropriate where statistical evidence alone is inconclusive. When other incriminating evidence raises a suspicion apart from the statistical evidence, Bayes’ theorem can be applied to indicate the degree that the inconclusive statistical evidence heightens the suspicion.”
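The updating the authors propose is most easily seen in odds form: posterior odds equal prior odds times the likelihood ratio of the match evidence. A sketch of that calculation; the prior and the trait frequency below are hypothetical illustrations, not figures drawn from the article:

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# All figures are hypothetical, for illustration only.
prior_odds = 1 / 1000          # suspicion from the non-statistical evidence
match_frequency = 1 / 500      # population frequency of the matching trait

# Likelihood ratio = P(match | defendant is source) / P(match | not source);
# this sketch assumes the true source matches with certainty.
likelihood_ratio = 1.0 / match_frequency

posterior_odds = prior_odds * likelihood_ratio      # ~ 0.5
posterior_prob = posterior_odds / (1 + posterior_odds)
print(round(posterior_prob, 3))  # ~ 0.333
```

On these assumed numbers, a trait shared by 1 person in 500 raises 1-in-1000 prior odds only to about a one-third probability of guilt, which illustrates the authors' point that inconclusive statistical evidence heightens suspicion rather than conclusively identifying a defendant.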

Challenging the Death Penalty With Statistics: Furman, McCleskey, and a Single County Case Study, 34 Cardozo L. Rev. 1227 (2013)

“In the present article, we report on a unique empirical study of the administration of the death penalty in Alameda County, California — the largest single-county death penalty study and the only study to examine intra-county geographic disparities in death-charging and death-sentencing. The data set, drawn from 473 first degree murder convictions for murders occurring over a 23-year period, compares death-charging and death-sentencing in the two halves of the county. During the study period, the two halves differed significantly in racial makeup — the population of North County was over 30% African-American, and of South County less than 5% African-American; and the two halves differed in the race of homicide victims — in North County, African-Americans were homicide victims roughly 4.5 times as often as Whites, while, in South County, Whites were homicide victims more than three times as often as African-Americans. The study reveals that there were statistically significant disparities in death-charging and death-sentencing according to the location of the murder: the Alameda County District Attorney was substantially more likely to seek death, and capital juries, drawn from a county-wide jury pool, were substantially more likely to impose death, for murders that occurred in South County. We argue that, McCleskey notwithstanding, statistical evidence such as the “race of neighborhood” disparities found in the present study should support constitutional challenges to the death penalty under both the Equal Protection Clause and the Eighth Amendment.”

Continuing Debate Over Mathematics in the Law of Evidence: A Comment on “Trial By Mathematics,” 84 Harv. L. Rev. 1801 (1971)

“In an article published earlier in this Volume, Professor Laurence H. Tribe critically examined the use of mathematical techniques in the legal system, including their use at trial, and concluded that such techniques were not only likely to yield inaccurate results, but also threatened a multitude of societal values embodied in the trial process. Michael O. Finkelstein and Professor William B. Fairley, authors of a proposal challenged by Professor Tribe, here respond to defend both their own technique and the use of mathematics at trial generally; Professor Tribe, in turn, replies with a further criticism of mathematical proof. The dialogue formed by these Comments brings out the points of contention and highlights some of the competing values involved, including the advancement of rational decisionmaking in the trial process and the preservation of fundamental notions of respect for the accused in criminal procedure.”

Criminal Investigative Failures: Avoiding the Pitfalls (Part Two), FBI L. Enforcement Bull., Oct. 2006, at 12

“Part one of this article focused on cognitive biases and how they can contribute to criminal investigative failures. Part two presents probability errors and organizational traps that can lead investigations astray. It also offers recommendations and additional strategies that investigators may find helpful.”

Evidence: Admission of Mathematical Probability Statistics Held Erroneous for Want of Demonstration of Validity, 1967 Duke L.J. 665

“In State v. Sneed the New Mexico Supreme Court limited its disapproval of evidence of probability statistics to the particular facts presented but failed to articulate specific safeguards for subsequent use of such evidence. This note explores the nature of probability statistics, their potential utility in a legal context, and criteria by which their admissibility might be determined.”

Evidence, Probability, and the Burden of Proof, Ariz. L. Rev. (forthcoming 2013)

“This Article analyzes the probabilistic and epistemological underpinnings of the burden of proof doctrine. We show that this doctrine is best understood as instructing factfinders to determine which of the parties’ conflicting stories makes most sense in terms of coherence, consilience, causality, and evidential coverage. By applying this method, factfinders should try — and will often succeed — to establish the truth, rather than a statistical surrogate of the truth, while securing the appropriate allocation of the risk of error. Descriptively, we argue that this understanding of the doctrine — the “relative plausibility theory” — corresponds to what our courts actually do. Prescriptively, we argue that the relative-plausibility method is operationally superior to factfinding that relies on mathematical probability. This method aligns with people’s natural reasoning and common sense, avoids paradoxes engendered by mathematical probability, and seamlessly integrates with the rules of substantive law that guide individuals’ primary conduct and determine liabilities and entitlements. We substantiate this claim by juxtaposing the extant doctrine against two recent contributions to evidence theory: Professor Louis Kaplow’s proposal that the burden of proof should be modified to track the statistical distributions of harms and benefits associated with relevant primary activities; and Professor Edward Cheng’s model that calls on factfinders to make their decisions by using numbers instead of words. Specifically, we demonstrate that both models suffer from serious conceptual problems and are not feasible operationally. The extant burden-of-proof doctrine, we conclude, works well and requires no far-reaching reforms.”

Further Critique of Mathematical Proof, 84 Harv. L. Rev. 1810 (1971)

“In a recent article in this *Review*, I [Laurence H. Tribe] undertook to assess the usefulness, limitations, and possible dangers of employing mathematical methods in the legal process, both in the conduct of individual trials, and in the design of procedures for the trial system as a whole. Michael Finkelstein and William Fairley, addressing themselves exclusively to that part of my discussion of the use of mathematical methods in the conduct of trials which criticized their earlier work, reply to several of my criticisms by suggesting that their intentions were far more modest than the methods of mathematical proof I examined. Indeed, if the technique they advocated were as carefully confined as they had evidently intended, some of the problems I discussed would not arise.

Yet their good intentions do not diminish the force of my criticisms. I realized, in writing my article, that by investigating irrational as well as rational uses of theoretically sound methods, I would open myself to the ‘charge that . . . I have confused the avoidable costs of using a tool badly with the inherent costs of using it well.’ As I went on to say, however, ‘the costs of abusing a technique must be reckoned among the costs of using it at all to the extent that the latter creates the risk of the former.’”

Illuminating Innumeracy, 63 Case W. Res. L. Rev. 769 (2013)

“Everyone knows that lawyers are bad at math. Many fields of law, though — from explicitly number-focused practices like tax law and bankruptcy, to the less obviously numerical fields of family law and criminal defense — require interaction with, and sophisticated understandings of, numbers. To the extent that lawyers really are bad at math, why is that the case? And what, if anything, should be done about it?

In this Article, I [Lisa Milot] show the ways in which our acceptance of innumeracy harms our ability to practice and think about the law. On a practical level, we miscalculate numbers, oversimplify formulas, and, ultimately, misapply mathematical principles. These shortcomings, then, cause us to misunderstand and accept without question the assumptions and biases hiding in the shadows of numerical information, limiting our ability to fully represent our clients. In an effort to begin to understand and move beyond the innumeracy present in the law, I distinguish between two types of innumeracy by lawyers: objective innumeracy (or a lack of math competence) and subjective innumeracy (or a lack of math confidence), and suggest that empirical research into the causes of legal innumeracy is needed. I conclude by providing suggestions for beginning to overcome innumeracy in the law in our roles as practitioners, lawmakers, law professors, and law students.”

Interpretation of Statistical Evidence in Criminal Trials: The Prosecutor’s Fallacy and the Defense Attorney’s Fallacy, 11 Law & Hum. Behav. 167 (1987)

“Two experiments tested 217 undergraduates’ ability to use statistical evidence on the incidence rate of a “matching” characteristic when judging the probable guilt of a criminal suspect, based on written descriptions of evidence. Exp I varied whether incidence rate statistics were presented as conditional probabilities or as percentages and found the former promoted inferential errors favoring the prosecution, while the latter produced more errors favoring the defense. Exp II exposed Ss to 2 fallacious arguments on how to interpret the statistical evidence. The majority of Ss failed to detect the error in one or both of the arguments. In both experiments, a comparison of Ss’ judgments to Bayesian norms revealed a general tendency to underutilize statistical evidence.”
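The two fallacies tested in the study can be made concrete with a toy calculation (the population and incidence figures below are invented for illustration). The prosecutor’s fallacy treats the incidence rate P(match | innocent) as if it were P(innocent | match); the defense attorney’s fallacy treats the defendant as no more suspect than anyone else sharing the trait, ignoring all non-statistical evidence that narrows the suspect pool:

```python
# Prosecutor's vs. defense attorney's fallacy, with hypothetical numbers.
population = 1_000_000   # pool of possible culprits (invented)
incidence = 0.01         # share of the population with the matching trait (invented)

matching_people = population * incidence   # about 10,000 people share the trait
p_match_given_innocent = incidence         # what the statistic actually measures

# Prosecutor's fallacy: asserting P(innocent | match) == incidence (wrong).
# Defense attorney's fallacy: asserting the probability of guilt is merely
# 1 / matching_people, as if no other evidence pointed to the defendant.
p_guilt_ignoring_other_evidence = 1 / matching_people

print(matching_people)                   # 10000.0
print(p_guilt_ignoring_other_evidence)   # 0.0001
```

Neither number is the probability of guilt: the first inference inverts a conditional probability, and the second discards every piece of evidence except the trait frequency — which is why the study compares subjects’ judgments to Bayesian norms.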

Likelihoodism, Bayesianism, and a Pair of Shoes, 53 Jurimetrics J. 1 (2012)

“In R v. T, [2010] EWCA (Crim) 2439, a footwear analyst followed recommendations of the Forensic Science Service in testifying to the weight of the evidence according to a standardized table for characterizing likelihood ratios, reporting that the evidence established “a moderate degree of scientific evidence to support the view that the [Nike trainers recovered from the appellant] had made the footwear marks” in question. The Court of Appeal for England and Wales offered a variety of reasons for holding that this testimony should not have been received. Although the opinion can and should be read narrowly, the apparent preference for traditional opinion testimony about the source of such trace evidence is unfortunate. This essay adds to previous criticism of that aspect of the opinion by distinguishing between likelihood and Bayesian theories of inference. It argues that courts should be receptive to the efforts of forensic analysts, guided by either of these theories, to avoid source attributions and to direct their testimony to the strength of the evidence with respect to competing hypotheses.”

Model of Criminal Process: Game Theory and Law, 56 Cornell L. Rev. 57 (1970-1971)

“In this article I [Robert L. Birmingham] attempt to use the techniques of game theory to isolate minimal attributes of problems familiar in criminal law. In the first section I construct a model indicating the impact of the criminal law on the actions of prospective malefactors and, by extension, on the welfare of society as a whole; in subsequent sections I explore the implications of this model for public policy. My hope is to achieve clarity through abstractness.”

Numeracy and Legal Decision Making, Ariz. St. L.J. (forthcoming 2014)

“Health and financial decision-making literatures have chronicled how people’s numerical abilities affect their decisions. Until now, however, there has been no empirical study of whether numeracy — or people’s ability to understand and use numbers — also interacts with legal decision making. This Article presents the first such study, and describes three original findings related to the role of numeracy in legal decision making. First, the study shows a surprisingly high level of math skill among law students, especially given the common folk wisdom that lawyers are bad at math. Second, although prior research in non-legal contexts has shown that people with low numeracy are particularly susceptible to cognitive bias, we detect no significant relationship between law students’ math skills and their susceptibility to bias or framing effects. Finally, and most worryingly, our findings show that the substance of legal analysis varies with math skill for at least some subset of cases.”

People v. Nelson: A Tale of Two Statistics, 7 Law Probability & Risk 249 (2008)

“In recent years, defendants who were identified as a result of a search through a database of DNA profiles have argued that the probability that a randomly selected person would match a crime-scene stain overstates the probative value of the match. The statistical literature is divided, with most statisticians who have written on the subject rejecting this claim. In People v. Nelson, [43 Cal.4th 1242 (Sup. Ct. 2008)] the Supreme Court of California held that when the random-match probability is so small as to make it exceedingly unlikely that any unrelated individual has the incriminating DNA profile, this statistic is admissible in a database-search case. In dicta, the court suggested that the defendant might be permitted to introduce an inflated match probability to counter the prosecution’s statistic. This Comment describes the statistical issue, questions some of the reasoning in Nelson, and suggests other approaches that a defendant might take in response to a cold hit in the database.”
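The database-trawl issue the Comment describes can be sketched with a toy calculation (the figures below are hypothetical, not the Nelson statistics): even a very small per-person random-match probability implies an appreciable chance of at least one coincidental “cold hit” when many unrelated profiles are searched.

```python
# Chance of at least one coincidental match in a database trawl.
# Hypothetical figures; not the statistics from People v. Nelson.

def p_any_coincidental_match(p: float, n: int) -> float:
    """Probability that at least one of n unrelated profiles matches by
    chance, given a per-person random-match probability p."""
    return 1.0 - (1.0 - p) ** n

# A 1-in-a-million random-match probability across 300,000 profiles
# still yields roughly a one-in-four chance of some coincidental hit:
print(round(p_any_coincidental_match(1e-6, 300_000), 4))  # 0.2592
```

This gap between the tiny single-comparison probability and the much larger trawl-wide probability is the crux of the dispute over how (and whether) match statistics should be adjusted in database-search cases.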

Practical Application of Probability in Court: Advancing Science or Timeless Art?, SSRN, 2013

“The application of probability to issues of proof and evidence is now a well-developed academic field, forming as it does a central part of the ‘science of evidence’. But how far has that science developed in the practical setting of the court room? Starting from its roots and reviewing its progress to the present day, it is argued that while the science which supports statistical evidence has become ever more sophisticated, almost nothing has changed in English courts so far as what happens during the process of a trial. In this respect, the formal court dress worn by English advocates, largely unchanged since 1685, is emblematic of the lack of progress of probability theory since its inception.”

Primer for the Nonmathematically Inclined on Mathematical Evidence in Criminal Cases: People v. Collins and Beyond, 47 Wash. & Lee L. Rev. 741 (1990)

“The article attempts to accomplish these goals in four parts. Part I provides some basic building blocks necessary to initially understand the topic. In Part II, the article re-examines the landmark mathematical evidence case, People v. Collins, [68 Cal. 2d 319 (Sup. Ct. 1968)] which must be understood completely because it marks the real beginning of the intersection between the law and mathematics, and because it continues to be considered by evidence texts, courts, and scholars as the preeminent case on the subject. Next, in Part III, the article gives an overview of the development of scholarship and case law since Collins. The article culminates in Part IV by analyzing the five kinds of mathematical evidence that arise in criminal cases and applying the scholarship, case law, and social science research which address their appropriateness as proof.”

Probability Analysis of Judicial Fact-Finding: A Preliminary Outline of the Subjective Approach, 1 U. Tol. L. Rev. 538 (1969)

“Mathematical probability is often regarded as an abstract theory of little practical value in analyzing courtroom decisionmaking. Professor Cullison suggests, however, that a comprehensive probabilistic analysis of fact-finding can be extremely useful. After discussing basic probability concepts, he outlines an extensive probabilistic model for analyzing the judicial factfinding process. Application of this model will ultimately be beneficial to a practitioner preparing his case for trial as well as to an academician suggesting procedural or substantive reform in our legal system.”

Probability, Individualization, and Uniqueness in Forensic Science Evidence: Listening to the Academies, 75 Brook. L. Rev. 1163 (2010)

“Day in and day out, criminalists testify to positive, uniquely specific identifications of fingerprints, bullets, handwriting, and other trace evidence. A committee of the National Academy of Sciences, building on the writing of academic commentators, has called for sweeping changes in the presentation and production of evidence of identification. These include some form of circumscribed and standardized testimony. But the Academy report is short on the specifics of the testimony that would be legally and professionally allowable. This essay outlines possible types of testimony that might harmonize the testimony of criminalists with the actual state of forensic science. It does so through a critical analysis of the arguments and proposals of two critics of ‘individualization’ testimony in forensic science. By clarifying the relationship between uniqueness and individualization, the essay advances a slightly less skeptical view of individualization than that expounded by Professors Michael Saks and Jay Koehler. Among other things, the essay argues that there is no rule of probability, logic, or ontology that prevents individualization and that testimony of uniqueness or individualization is scientifically acceptable in some situations. Recognizing that these situations are unusual, however, it also surveys some evidentiary rules and practices that could curb the excesses of the current form of testimony.”

Scientifically Trained Law Clerk: Legal and Ethical Considerations of Relying on Extra-Record Technical Training or Experience, ExpressO (2013)

“Technically trained law clerks should be permitted to rely on extralegal scientific principles, but only if those principles are objectively verifiable and not subject to reasonable dispute—a standard that matches Federal Rule of Evidence 201. By contrast, law clerks should not rely on extralegal scientific principles that are not objectively verifiable or beyond reasonable dispute. Technical training is particularly useful for law clerks at the Federal Circuit Court of Appeals because of its focus on patent cases. Technically trained clerks could also be useful at the trial level because the district courts recently began a ten-year Patent Pilot Program.”

Trial by Mathematics: Precision and Ritual in the Legal Process, 84 Harv. L. Rev. 1329 (1971)

“Professor Tribe considers the accuracy, appropriateness, and possible dangers of utilizing mathematical methods in the legal process, first in the actual conduct of civil and criminal trials, and then in designing procedures for the trial system as a whole. He concludes that the utility of mathematical methods for these purposes has been greatly exaggerated. Even if mathematical techniques could significantly enhance the accuracy of the trial process, Professor Tribe also shows that their inherent conflict with other important values would be too great to allow their general use.”

Trial by Mathematics – Reconsidered, 10 Law, Probability and Risk 167 (2011)

“Putting aside the special (and comparatively trivial) case of mathematical and formal methods that make their appearance in legal settings because they are accoutrements of admissible forensic scientific evidence, I [Peter Tillers] propose that discussants, researchers, and scholars of every stripe begin by carefully considering the possibility that mathematical and formal analysis of inconclusive argument about uncertain factual questions in legal proceedings could have any one (or more) of the following distinct purposes: 1. To predict how judges and jurors will resolve factual issues in litigation. 2. To devise methods that can replace existing methods of argument and deliberation in legal settings about factual issues. 3. To devise methods that mimic conventional methods of argument about factual issues in legal settings. 4. To devise methods that support or facilitate existing, or ordinary, argument and deliberation about factual issues in legal settings by legal actors (such as judges, lawyers, and jurors) who are generally illiterate in mathematical and formal analysis and argument. 5. To devise methods that would capture some but not all ingredients of argument in legal settings about factual questions. 6. To devise methods that perfect – that better express, that increase the transparency of – the logic or logics that are immanent, or present, in existing ordinary inconclusive reasoning about uncertain factual hypotheses that arise in legal settings. 7. To devise methods that have no practical purpose – and whose validity cannot be empirically tested – but that serve only to advance understanding – possibly contemplative understanding – of the nature of inconclusive argument about uncertain factual hypotheses in legal settings.”

Use of Statistics in Criminalistics, 55 J. Crim. L., Criminology, and Police Sci. 514 (1964)

“This paper will present a relatively non-technical discussion of some aspects of the utilization of statistics in the field of criminalistics. A general concept of probability will be examined to see what relationship it might bear to the interpretative areas of criminalistics. Some specific conditions imposed by the nature of the statistical process will be outlined and then illustrated with some previously published data. It is hoped that this presentation will help to orient the reader toward a better understanding of the proper role of statistical methods in criminalistics.”

**CURRENT AWARENESS**

Forensic Mathematics (Charles H. Brenner, Ph.D.)

“Forensic Mathematics… is the best short description that I have found to describe the work that I do, which mostly pertains to DNA identification, and includes consulting, writing software – DNA*VIEW is used by some 100 laboratories, in every continent (except Antarctica) – academic activities in mathematics, biostatistics, and various aspects of population genetics.”

The Math Guy (NPR)

This is the archive of National Public Radio broadcasts by Dr. Keith Devlin, Professor of Mathematics at Stanford University.

The Numbers Guy (Wall St. J.)

“Carl Bialik examines the way numbers are used, and abused. The Numbers Guy examines numbers in the news, business and politics. Some numbers are flat-out wrong or biased, while others are valid and help us make informed decisions. Carl Bialik tells the stories behind the stats, in occasional updates on this blog and in his column published every Saturday in The Wall Street Journal. Carl, who holds a degree in mathematics and physics from Yale University, also writes daily about sports numbers on WSJ.com.”

**AMERICAN LAW REPORTS**

__Admissibility, in Criminal Case, of Statistical or Mathematical Evidence Offered for Purpose of Showing Probabilities__, 36 A.L.R.3d 1194

“This annotation collects the criminal cases in which the courts have ruled on the admissibility of statistical or mathematical evidence offered to show the probability that the defendant was, or was not, the person who committed the alleged crime, or that he was, or was not, connected with the criminal act in some way.”

__Admissibility and Weight of Surveys or Polls of Public or Consumers’ Opinion, Recognition, Preference, or the Like__, 76 A.L.R.2d 619

“The annotation has been confined to surveys or polls of members of the public generally or of consumers of particular goods or services, thus excluding such matters as purely statistical compilations, samplings of public utility property to determine the reproduction cost thereof, scientific tests or samplings, sampling of goods for the purpose of determining quality or condition, and ‘cost surveys’ for a particular trade or industry to determine the cost of doing business in such trade or industry.”

++++++

1 See Probability and Statistics, Encyclopedia Britannica (last viewed Nov. 26, 2013)(“probability and statistics, the branches of mathematics concerned with the laws governing random events, including the collection, analysis, interpretation, and display of numerical data. Probability has its origin in the study of gambling and insurance in the 17th century, and it is now an indispensable tool of both social and natural sciences. Statistics may be said to have its origin in census counts taken thousands of years ago; as a distinct scientific discipline, however, it was developed in the early 19th century as the study of populations, economies, and moral actions and later in that century as the mathematical tool for analyzing such numbers.”); Bayes’ Theorem, Stanford Encyclopedia of Philosophy, June 28, 2003 (“Bayes’ Theorem is a simple mathematical formula used for calculating conditional probabilities. It figures prominently in subjectivist or Bayesian approaches to epistemology, statistics, and inductive logic.”).

2 See generally McCormick on Evidence (7th ed. 2013)(sects. 208 Surveys and Opinion Polls; 209 Correlations and Causes: Statistical Evidence of Discrimination; 210 Identification Evidence, Generally; 211 Paternity Testing); Reference Manual on Scientific Evidence (3rd ed. 2011); __Admissibility, in Criminal Case, of Statistical or Mathematical Evidence Offered for Purpose of Showing Probabilities__, 36 A.L.R.3d 1194.

3 See, e.g., Dmitri Krioukov, The Proof of Innocence, Annals of Improbable Research, v.18, n.4, p.12 (2012)(“We show that if a car stops at a stop sign, an observer, e.g., a police officer, located at a certain distance perpendicular to the car trajectory, must have an illusion that the car does not stop, if the following three conditions are satisfied: (1) the observer measures not the linear but angular speed of the car; (2) the car decelerates and subsequently accelerates relatively fast; and (3) there is a short-time obstruction of the observer’s view of the car by an external object, e.g., another car, at the moment when both cars are near the stop sign.”); Yves van Gennip et al., Community Detection Using Spectral Clustering on Sparse Geosocial Data, SIAM J. Appl. Math., 73(1), 67 (2013)(“In this article we identify social communities among gang members in the Hollenbeck policing district in Los Angeles, based on sparse observations of a combination of social interactions and geographic locations of the individuals. This information, coming from Los Angeles Police Department (LAPD) Field Interview cards, is used to construct a similarity graph for the individuals. We use spectral clustering to identify clusters in the graph, corresponding to communities in Hollenbeck, and compare these with the LAPD’s knowledge of the individuals’ gang membership. We discuss different ways of encoding the geosocial information using a graph structure and the influence on the resulting clusterings. Finally we analyze the robustness of this technique with respect to noisy and incomplete data, thereby providing suggestions about the relative importance of quantity versus quality of collected data.”).

4 See, e.g., Commonwealth v. Ferreira, 460 Mass. 781, 787-788 (Mass. Sup. Jud. Ct. 2011)(“The prosecutor also erred in equating proof beyond a reasonable doubt with a numerical percentage of the probability of guilt, in this case, ninety-eight per cent. “[T]o attempt to quantify proof beyond a reasonable doubt changes the nature of the legal concept of ‘beyond a reasonable doubt,’ which seeks ‘abiding conviction’ or ‘moral certainty’ rather than statistical probability.” Commonwealth v. Rosa, 422 Mass. 18, 28 (1996). “The idea of reasonable doubt is not susceptible to quantification; it is inherently qualitative.” Commonwealth v. Sullivan, 20 Mass. App. Ct. 802, 806 (1985). See Commonwealth v. Mack, 423 Mass. 288, 291 (1996) (“the concept of reasonable doubt is not a mathematical one”).”). See generally Ken Strutin, Forensic Due Process: Lawyering With Science, N.Y.L.J., March 20, 2012, at 5 (“Summation was no substitute for expert testimony that could be tested by the defense and scrutinized by the court. The use of unvetted statistical statements inflated the credibility of the victim’s identification, reducing proof beyond a reasonable doubt to mathematical bolstering.” (footnote omitted)); D. Kim Rossmo, Criminal Investigative Failures: Avoiding the Pitfalls (Part Two), FBI L. Enforcement Bull., Oct. 2006, at 12 (“Part two presents probability errors and organizational traps that can lead investigations astray. It also offers recommendations and additional strategies that investigators may find helpful.”).

5 See, e.g., Jackson v. Pollion, 2013 U.S. App. LEXIS 21983 (7th Cir. Ill. Oct. 28, 2013)(“The discomfort of the legal profession, including the judiciary, with science and technology is not a new phenomenon. Innumerable are the lawyers who explain that they picked law over a technical field because they have a “math block”—“law students, as a group, seem peculiarly averse to math and science.” David L. Faigman, et al., Modern Scientific Evidence: Standards, Statistics, and Research Methods v (2008 student ed.). But it’s increasingly concerning, because of the extraordinary rate of scientific and other technological advances that figure increasingly in litigation.”); Angela Saini, A Formula for Justice, The Guardian, Oct. 2, 2011 (“Bayes’ theorem is a mathematical equation used in court cases to analyse statistical evidence. But a judge has ruled it can no longer be used. Will it result in more miscarriages of justice?”); Harold E. Potts, Letter to the Editor: An Application of Mathematics to Law, Nature, Vol. 91, Issue 2269, pp. 187-189 (15 May 1913)(Responding to criticism about the author’s article concerning the role of math in the adjudication of patent claims.).

6 See David McCord, Primer for the Nonmathematically Inclined on Mathematical Evidence in Criminal Cases: People v. Collins and Beyond, 47 Wash. & Lee L. Rev. 741 (1990) (“Statistics are clearly “facts” and thus there is no difficulty in characterizing them as “evidence.” Probabilities are more of a way of thinking about the significance of evidence and thus more easily characterizable as “arguments” than as “evidence.” Nonetheless, most courts and commentators have characterized probabilities as “evidence” and thus we will consider probabilities as “evidence” for purposes of this Article as well.” Id. at 758 n. 59.).

7 Id. at 758 (“It consists of use of statistics based on empirical sampled data to eliminate possible culprits, while simultaneously not eliminating the defendant as the culprit. The key characteristic of this category is that it involves no probability calculation (although it provides data upon which the second category — probabilities of a random match — can readily be based).”).

8 Id. (“Any empirical statistic can be used to form such a probability.”).

9 Id. at 759 (“In this category, empirical statistics form part of the basis for formation of a subjective probability, but not via use of Bayes’ Theorem (e.g., “Based upon my assessment of the probability of truth of the eyewitness testimony plus the statistics regarding blood factors, I believe that the defendant’s guilt is extremely probable.”)”).

10 Id. (“This category is composed of instances where the prosecutor, in line with the mathematical probabilist tenet that all evidence is inherently probabilistic, probabilizes non-empirically sampled data, and then further relies on the probabilist position that the burden of persuasion is probabilistic by arguing that the subjective probability of guilt based upon the non-empirically sampled data is sufficient to convict, all without using Bayes’ Theorem.”).

11 Id. (“[E.]g., “Based upon my evaluation of the probability of truth of the eyewitness testimony, as combined with the blood factor evidence’s probability via Bayes’ Theorem, I believe that defendant’s guilt is extremely probable.””).

12 See Leila Schneps and Coralie Colmez, Math on Trial: How Numbers Get Used and Abused in the Courtroom (Basic Books 2013).

13 Leila Schneps and Coralie Colmez, Justice Flunks Math, N.Y. Times, Mar. 27, 2013, at A23.