Strengthening Forensic Science: The Next Wave of Scholarship

The National Academy of Sciences report, Strengthening Forensic Science in the United States: A Path Forward [NAS Report], is the most important recent contribution to the ongoing reevaluation of forensic evidence. Since the release of the prepublication version in February 2009, its findings and conclusions have been steadily sinking into the collective consciousness of the legal and scientific communities.1

This article focuses on threads of scholarly literature citing and commenting on the NAS Report and highlights discussions in which experts and practitioners rethink the merits of a wide range of forensic issues.2 On the horizon is the Third Edition of the Reference Manual on Scientific Evidence, which will have its own impact on legal thinking about science in the courtroom.


Strengthening Forensic Science in the United States: A Path Forward (National Research Council 2009)
“In this report, The National Academy of Sciences’ Committee on Identifying the Needs of the Forensic Science Community fulfills the congressional charge of providing recommendations on policy initiatives that must be adopted in any plan to improve the forensic science disciplines and to allow the forensic science community to serve society more effectively. The committee reached a consensus on the most important issues now facing the forensic science community and medical examiner system, producing 13 recommendations to address these issues. The recommendations are intended to address the following deficiencies in the forensic science enterprise in the United States: underresourcing that has created laboratory backlogs and undermined the quality of the work done; lack of a unified strategy for developing a forensic science research plan across Federal agencies; and multiple types of practitioners with different levels of education, training, professional cultures, and standards for performance. The fragmented nature of forensic science in America increases the likelihood that the quality and interpretation of evidence presented in court will vary unpredictably among jurisdictions. The committee’s key recommendation for addressing these deficiencies is for Congress to establish and appropriate funds for an independent Federal entity, the National Institute of Forensic Science (NIFS). This federally funded independent body will oversee and direct the forensic disciplines in the Nation. The other recommendations in this report are linked to the creation of the NIFS; however, even if the creation of the NIFS is impeded, the core ideas and principles in each of the other recommendations should be pursued. 
They pertain to standardized terminology and reporting; more and better research; best practices and standards; quality control, assurance, and improvement; codes of ethics; improved education and training; the medicolegal death investigation system; automated fingerprint identification system interoperability; and the linking of forensic science disciplines to homeland security. A subject index, appended committee meeting agendas, and biographical information for committee members and staff are included.”

Committee on the Development of the Third Edition of the Reference Manual on Scientific Evidence (FJC 2009)
“At the request of the Federal Judicial Center (FJC), and in collaboration with the FJC, CSTL [Committee on Science, Technology, and Law] will develop the third edition of the Reference Manual on Scientific Evidence. The Reference Manual assists judges in managing cases involving complex scientific and technical evidence by describing the basic tenets of key scientific fields from which legal evidence is typically derived and providing examples of cases in which that evidence has been used. The development of the third edition will follow the basic structure of the current edition, but will include, in addition to updating, new topics and annotated case citations. An ad hoc committee under the auspices of CSTL will take the lead in producing the reference manual, i.e., developing, approving, publishing, and disseminating the Reference Manual. This work will be guided by the committee procedures of the National Research Council, and the final text will undergo review through the Academies’ formal review process. CSTL also will organize a one-day symposium at which the new edition will be released to the legal and scientific communities. The Third Edition of the Reference Manual on Scientific Evidence will be issued in late 2010.”


Can Bad Science Be Good Evidence?: Lie Detection, Neuroscience, and the Mistaken Conflation of Legal and Scientific Norms, Cornell Law Review, Forthcoming
“As the capabilities of cognitive neuroscience, in particular functional magnetic resonance imaging (fMRI) ‘brain scans,’ have become more advanced, some have claimed that fMRI-based lie-detection can and should be used at trials and for other forensic purposes to determine whether witnesses and others are telling the truth. Although some neuroscientists have promoted such claims, most have aggressively resisted them, arguing that the research on neuroscience-based lie-detection is deeply flawed in numerous ways. And so these neuroscientists have resisted any attempt to use such methods in litigation, insisting that poor science has no place in the law. But although the existing studies have serious problems of validity when measured by the standards of science, and it is true as well that the reliability of such methods is significantly lower than their advocates claim, it is nevertheless an error to assume that the distinction between good and bad science, whether as a matter of validity or of reliability, is dispositive for law. Law is not only about putting criminals in jail, and numerous uses of evidence in various contexts in the legal system require a degree of probative value far short of proof beyond a reasonable doubt. And because legal and scientific norms, standards, and goals are different, good science may still not be good enough for some legal purposes, and, conversely, some examples of bad science may, in some contexts, still be good enough for law. Indeed, the exclusion of substandard science, when measured by scientific standards, may have the perverse effect of lowering the accuracy and rigor of legal fact-finding, because the exclusion of flawed science will only increase the importance of the even more flawed non-science that now dominates legal fact-finding. And thus the example of neuroscience-based lie detection, while timely and important in its own right, is even more valuable as a case study suggesting that Daubert v.
Merrell Dow Pharmaceuticals may have sent the legal system down a false path. By inappropriately importing scientific standards into legal decision-making with little modification, Daubert confused the goals of science with those of law, a mistake that it is not too late for the courts to correct.”

Computer-Assisted Handwriting Analysis: Interaction with Legal Issues in U.S. Courts, 57 Computational Forensics (No. 18) 137 (Aug. 2009)

“Advances in the development of computer-assisted handwriting analysis have led to the consideration of a computational system by courts in the United States. Computer-assisted handwriting analysis has been introduced in the context of Frye or Daubert hearings conducted to determine the admissibility of handwriting testimony by questioned document examiners, as expert witnesses, in civil and criminal proceedings. This paper provides a comparison of scientific and judicial methods, and examines concerns over reliability of handwriting analysis expressed in judicial decisions. Recently, the National Research Council assessed that ‘the scientific basis for handwriting comparisons needs to be strengthened’. Recent studies involving computer-assisted handwriting analysis are reviewed in light of the concerns expressed by the judiciary and National Research Council. A future potential role for computer-assisted handwriting analysis in the courts is identified.”

Cross-Examination: Seemingly Ubiquitous, Purportedly Omnipotent, and ‘At Risk’, 14 Widener L. Rev. 427 (2009)
“Cross-examination is viewed as a core aspect of the trial process, both criminal and civil, and its use and purported power are omnipresent in the American adjudicative system. Indeed, this role is confirmed in the abundance of literature (both fictional and educational) involving cross-examination, and its increasing prominence in the law school curriculum. This article confirms the exalted status cross-examination has achieved and arguably retains in the American trial and fact-finding process, while simultaneously identifying its frailties: its ineffectiveness as a truth-discerning tool in varying contexts; trends in constitutional law that will eliminate the requirement of cross-examination for expanding categories of witnesses; and the impact of technology and popular media on the learning processes and expectations of jurors. Particularly because of the transformation of hearsay law and the continuing trend toward visual rather than aural learning and knowledge accumulation, cross-examination may play a reduced role in the trial process and its form may need to be reinvented.”

How Scientific Is Forensic Evidence?, Champion, Aug. 2009, at 36
“The Champion assembled a panel to discuss the issues raised in the [NAS] report. Our panelists are Adina Schwartz, a professor at John Jay College of Criminal Justice and The Graduate Center, City University of New York; William C. Thompson, a professor at the University of California, Irvine; Michael Burt, a death penalty resource counsel in San Francisco; and the Honorable Jed S. Rakoff, a U.S. District Judge in the Southern District of New York. While these issues have been discussed for years on an informal basis, perhaps the assessment by the prestigious National Academy of Sciences will result in positive change. Congress has already started to hold hearings in order to determine next steps.”

Identification, Individualization and Uniqueness: What’s the Difference?, Law, Probability and Risk, Advance Access (published online July 7, 2009)
“Criminalists and many forensic scientists concerned with the identification of trace evidence have distinguished between identification and individualization, but they have not distinguished as precisely between individualization and uniqueness. This paper clarifies these terms and discusses the relationships among identification, individualization and uniqueness in forensic science evidence.”

Invalid Forensic Science Testimony and Wrongful Convictions, 95 Va. L. Rev. 1 (2009)
“This is the first study to explore the forensic science testimony by prosecution experts in the trials of innocent persons, all convicted of serious crimes, who were later exonerated by post-conviction DNA testing. Trial transcripts were sought for all 156 exonerees identified as having trial testimony by forensic analysts, of which 137 were located and reviewed. These trials most commonly included testimony concerning serological analysis and microscopic hair comparison, but some included bite mark, shoe print, soil, fiber, and fingerprint comparisons, and several included DNA testing. This study found that in the bulk of these trials of innocent defendants – 82 cases or 60% – forensic analysts called by the prosecution provided invalid testimony at trial – that is, testimony with conclusions misstating empirical data or wholly unsupported by empirical data. This was not the testimony of a mere handful of analysts: this set of trials included invalid testimony by 72 forensic analysts called by the prosecution and employed by 52 laboratories, practices, or hospitals from 25 states. Unfortunately, the adversarial process largely failed to police this invalid testimony. Defense counsel rarely cross-examined analysts concerning invalid testimony and rarely obtained experts of their own. In the few cases in which invalid forensic science was challenged, judges seldom provided relief. This evidence supports efforts to create scientific oversight mechanisms for reviewing forensic testimony and to develop clear scientific standards for written reports and testimony. The scientific community can, through an official government entity, promulgate standards to ensure the valid presentation of forensic science in criminal cases and thus the integrity and fairness of the criminal process.”

‘It’s Just a Shot Away’: MMR Vaccines and Autism and the End of the Daubertista Revolution, 35 Wm. Mitchell L. Rev. 1511 (2009)
“This Article predicts the demise of the Daubertista Revolution in science and law. Using the example of the February 2009 Federal Vaccine Court decisions that thoroughly and specifically debunk the belief that childhood measles, mumps, rubella (‘MMR’) vaccines cause autism, the Article posits that Daubertista hegemony over the field of science and law will and should end because we consistently ignore non-evidence legal cases even when courts engage in sophisticated analyses of vital and complex scientific questions. Other glaring examples discussed in the Article include our almost complete disinterest in science and law in the context of Intelligent Design Theory cases including the groundbreaking science-centered decision in Kitzmiller v. Dover Area School District. Many cases that do not involve the admissibility of scientific evidence provide insight into how law uses and misuses science and create new (currently neglected) opportunities to explore and predict shifts in social and judicial attitudes and behavior. The Article acknowledges that many Daubertistas have made significant contributions to the field, but proposes a new, more inclusive, transdisciplinary approach to both the study and practice of science-based legal controversies. This new approach would begin outside the courthouse with more explicit recognition of shared systemic social obstacles to sound science-based legal decisions in all fields such as: (1) growing American scientific illiteracy; (2) balanced media coverage of all so-called scientific ‘controversies’ from evolution to alien abduction; and (3) effective efforts to exaggerate scientific uncertainty or distort scientific claims to advance theistic, normative, political, or pecuniary objectives. By starting the analysis outside the courthouse, the Article more effectively links traditional Daubertista concerns of evidentiary quality control to more global questions of science, law and society.”

Law’s Looking Glass: Expert Identification Evidence Derived from Photographic and Video Images, Current Issues in Criminal Justice, Vol. 20, No. 3, March 2009
“This article offers a critical overview of expert identification evidence based on images. It reviews the Australian case law and then, in an interdisciplinary manner, endeavors to explain methodological, technical and theoretical problems with facial mapping evidence. It suggests that extant admissibility jurisprudence and traditional safeguards associated with expert opinion evidence and the adversarial trial might not adequately protect those accused of committing criminal acts when they are confronted with incriminating expert identification evidence.”

Limits of Social Framework Evidence, Law, Probability and Risk, Advance Access (published online August 21, 2009)
“An important debate is brewing over the proper scope of expert witness testimony that purports to summarize general social science evidence to provide context for the factfinder to decide case-specific questions. In a recent article, we argued that experts who provide this ‘social framework’ testimony should be restricted from making any linkages between general social science research findings and specific case questions unless such linkages are supported by scientifically reliable methods and principles. A response to our article by Professors Hart and Secunda argued that experts should be given much more latitude to match the facts of a particular case to findings from social science research. In this paper, we show that Hart and Secunda mischaracterized our arguments for restricting the scope of social framework evidence, ignored the actual practices of experts who are providing the expert opinions that we question, and misconstrued the motivations behind and likely implications of our arguments. We encourage courts to reject the approach that Hart and Secunda advocate, which would permit both plaintiff and defense experts to link general research to specific cases through nothing other than the expert’s subjective judgments and intuition.”

NAS Report: An Evidence Professor’s Perspective, It’s Evident (National Clearinghouse for Science, Technology and the Law), July 2009
“The focus of the National Academy of Sciences’ February, 2009 report ‘Strengthening Forensic Science in the United States’ was global – to call for systemic improvements to the forensic disciplines and sciences, with emphasis (inter alia) on the research needed to validate expert claims of individualization and identity. In doing so, however, the report called into question the degree of certainty testified to by practitioners of ‘soft’ forensic disciplines, the subjective pattern matching of fingerprints, ballistics, handwriting, tool marks, and tire and shoe print treads. In particular, the Report found an across-the-board inability to validate claims that a correspondence of features between crime scene evidence and a known (e.g., between a latent print left at a burglary and the print of a suspect) proves that the suspect was the sole possible contributor. This Article examines the consequences of the Report in the courtroom, identifying the legal issues that will arise in Frye and Daubert jurisdictions as litigants challenge the admissibility or scope of forensic ‘expert’ testimony.”

NAS Report on Forensic Science: A Forensic Scientist’s Response (Brent E. Turvey, MS, Crime Reconstruction, published Feb. 25, 2009)
“Forensic scientists come in many forms, and their numbers include many examiners who do not work in crime labs. They also lack uniform standards in education and methodology; their conclusions often lack scientific rigor and are overly confident; and they are too often marked by improper alignment with law enforcement and prosecutorial agencies. As a consequence, the forensic science community is fragmented and broken, cannot identify let alone fix its own problems, and does not speak with a single voice about what is best for its future. Moreover, it has proven incapable of holding itself accountable for anything that it does. Such are the findings in the recently published report by the National Academy of Sciences (NAS), Strengthening Forensic Science in the United States: A Path Forward (Edwards and Gotsonis, 2009). Subsequently, it falls to those of us who are relatively free to respond of their own accord, without political affiliation, censure, or fear of reprisal, to do so. This commentary is prepared in that spirit.”

NAS Report on Forensic Science: A Glass Nine-Tenths Full (This is About the Other Tenth) (SSRN 2009)
“The NAS Committee Report, Strengthening Forensic Science In The United States, issued in February of 2009, was a milestone in the decades-long struggle to get those who control the production and utilization of forensic science expertise to admit the various weaknesses of some of the techniques involved, and to take steps to strengthen the reliability of those techniques and their products. The NAS Committee Report is in some ways the culmination of those efforts, and has made it now untenable to dismiss criticisms as simply the cavils of uninformed academics with nothing better to do. In this sense the report is a glass nine-tenths full, and is to be celebrated as such. But then there is the other tenth, the tenth that may, as an unintended consequence, delay needed reform significantly and unnecessarily. The most significant part of this unwise tenth is the decision not to push strongly for the immediate adoption of masking and sequential unmasking protocols in forensic science practice, but instead to call for “more research” on the issue in advance of moving forward. This paper explains in detail why the ‘await more research’ approach is misguided.”

Next Innocence Project: Shaken Baby Syndrome and the Criminal Courts, 87 Wash. U. L. Rev. 1 (2009)

“Every year in this country, hundreds of people are convicted of having shaken a baby, most often to death. In a prosecution paradigm without precedent, expert medical testimony is used to establish that a crime occurred, that the defendant caused the infant’s death by shaking, and that the shaking was sufficiently forceful to constitute depraved indifference to human life. Shaken Baby Syndrome (SBS) is, in essence, a medical diagnosis of murder, one based solely on the presence of a diagnostic triad: retinal bleeding, bleeding in the protective layer of the brain, and brain swelling.
New scientific research has cast doubt on the forensic significance of this triad, thereby undermining the foundations of thousands of SBS convictions. Outside the United States, this scientific evolution has prompted systemic reevaluations of the prosecutorial paradigm. In contrast, our criminal justice system has failed to absorb the latest scientific knowledge. This is beginning to change, yet the response has been halting and inconsistent. To this day, triad-based convictions continue to be affirmed, and new prosecutions commenced, as a matter of course.
This Article identifies a criminal justice crisis and begins a conversation about its proper resolution. The conceptual implications of the inquiry—for scientific engagement in law’s shadow, for future systemic reform, and for our understanding of innocence in a post-DNA world—should assist in the task of righting past wrongs and averting further injustice.”

Place of Forensic Voice Comparison in the Ongoing Paradigm Shift, The 2nd International Conference on Evidence Law and Forensic Science Conference Thesis (Vol. 1, pp. 20–34, 2009)
“We are in the midst of what Saks & Koehler (2005) have called a paradigm shift in the evaluation of evidence in the forensic comparison sciences. This is a shift to requiring that the evaluation of forensic evidence actually be scientific, including that the reliability of methodologies be testable, and requiring that forensic evidence be evaluated and presented to the courts in a logically correct manner. Reliability was a primary concern in the US Supreme Court’s Daubert decision in 1993. The logically correct evaluation and presentation of the weight of forensic evidence was a primary concern in a number of court cases in the United Kingdom including the Appeal Court of England and Wales’ 1996 ruling on the presentation of DNA evidence in R v Doheny & Adams. In the US National Research Council’s report to Congress released in February 2009, current practice in the evaluation of nuclear DNA evidence is held up as a model to emulate, and current practices in other branches of forensic science are subject to sometimes severe criticism. In the present paper I examine the place of forensic voice comparison in the ongoing paradigm shift. Over the last decade a small number of forensic-voice-comparison researchers have been working in the post-shift paradigm. They have adopted the likelihood-ratio framework for the evaluation of forensic evidence, the same framework as used in DNA analysis. I provide a brief description of the likelihood-ratio framework, followed by a brief history of the adoption of the likelihood-ratio framework for forensic voice comparison by the research community, and by the forensic practitioner, law-enforcement, and judicial communities.”
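As a rough sketch of the likelihood-ratio framework the abstract invokes (my illustration, not drawn from the paper itself): the expert reports how much more probable the evidence E is under the prosecution hypothesis than under the defense hypothesis, and the trier of fact combines that ratio with the prior odds.

```latex
\mathrm{LR} \;=\; \frac{P(E \mid H_p)}{P(E \mid H_d)},
\qquad
\underbrace{\frac{P(H_p \mid E)}{P(H_d \mid E)}}_{\text{posterior odds}}
\;=\;
\mathrm{LR} \times
\underbrace{\frac{P(H_p)}{P(H_d)}}_{\text{prior odds}}
```

The logical division of labor, as the paradigm-shift literature presents it, is that the forensic scientist testifies only to the LR; assessing the prior odds, and hence the posterior, is reserved to the court.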

Reforming Eyewitness Identification Law and Practices to Protect the Innocent, 42 Creighton L. Rev. 595 (2009)
“This article discusses varying eyewitness identification reform proposals that may help to finally achieve a greater level of reliability in this critical phase of the criminal justice process. The author concludes that a comprehensive reform that includes tightening exclusionary rules, along with (minimally) corroboration requirements for death-sentencing, and more appropriately, for convictions in capital and non-capital cases, with a concomitant loosening of standards for relief on appeal, holds the most promise.
The article addresses adopting best practices; assuring compliance by means of exclusion; admitting expert testimony and educating juries; instructing on the vagaries of eyewitness identification; requiring corroboration with independent and reliable evidence; and redressing unsafe verdicts. A 2002 article advocated a return to the Stovall v. Denno two-stage exclusionary rule, at least in capital cases, and beyond that, proposed a bar on execution if a suggestive identification procedure occurred. This article returns to that view, and addresses additional proposals, including those that would modify the analysis required for exclusion consistent with social science findings; require corroboration in eyewitness identification and general cases; require reliable forensic or other evidence for capital sentencing; and repeal the death penalty to assure against irrevocable errors.”

Right Remedy for the Wrongly Convicted: Judicial Sanctions for Destruction of DNA Evidence, 77 Fordham L. Rev. 2893 (2009)
“Many state innocence protection statutes give courts the power to impose appropriate sanctions when biological evidence needed for postconviction DNA testing is wrongly destroyed by the government. Constitutional claims based on wrongful evidence destruction are governed by the virtually insurmountable ‘bad faith’ standard articulated in Arizona v. Youngblood. The wrongful destruction of DNA evidence in contravention of state innocence protection laws, however, should be governed by the standards used to adjudicate other ‘access to evidence’ violations in criminal cases, including disclosures mandated by the rules of criminal procedure, the Jencks Act, and Brady v. Maryland. Under the ‘access to evidence’ sanctions analysis, courts must balance the degree of government culpability in the destruction, the degree of prejudice to the defense, and the strength of the government’s case. In applying this analysis to the wrongful destruction of evidence needed for postconviction DNA testing, courts should give due weight to the exclusive power of DNA evidence to discredit other forms of evidence and prove identity to a scientific certainty. Further, in evaluating the strength of the government’s evidence at trial, courts must carefully scrutinize guilt determinations based largely or exclusively on evidence that has been the predominant cause of wrongful convictions, including stranger eyewitness identifications, non-DNA forensic evidence, uncorroborated confessions, and jailhouse informant testimony. Applying these critical lessons learned from over 200 exonerations to the sanctions determination, appropriate sanctions for the wrongful destruction of DNA evidence include a sentence reduction, a new trial, or dismissal.”

Safety in Numbers?: Deciding When DNA Alone Is Enough To Convict, New York University Law Review, Forthcoming
“Fueled by police reliance on offender databases and advances in crime scene recovery, a new type of prosecution has emerged in which the government’s case turns on a match statistic explaining the significance of a ‘cold hit’ between the defendant’s DNA profile and the crime-scene evidence. Such cases are unique in that the strength of the match depends on evidence that is nearly entirely quantifiable. Despite the growing number of these cases, the critical jurisprudential questions they raise about the proper role of probabilistic evidence, and courts’ routine misapprehension of match statistics, no framework currently exists – including a workable standard of proof – for determining sufficiency of the evidence in such a case. This article is the first to interrogate the relationship between ‘reasonable doubt’ and statistical certainty in the context of cold hit DNA matches. Examining the concepts of ‘actual belief’ and ‘moral certainty’ underlying the ‘reasonable doubt’ test, I argue that astronomically high source probabilities, while fallible, are capable of meeting the standard for conviction. Nevertheless, the starkly numerical nature of ‘pure cold hit’ evidence raises unique issues that require courts to apply a quantified threshold for sufficiency purposes. While any threshold will be arbitrary, I argue – citing recent juror studies and the need for uniformity and systemic legitimacy – that the threshold should be no less favorable to the defendant than a 1 in 1000 chance that the defendant is not the source of the evidence.”
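The 1-in-1000 threshold the abstract proposes can be made concrete with a toy Bayesian calculation. This is my illustration, not the article's analysis; the function name and the example numbers are hypothetical, and the model idealizes away laboratory error by treating the likelihood ratio as the inverse of the random match probability.

```python
def source_probability(prior_odds, random_match_prob):
    """Toy Bayesian update for a DNA 'cold hit'.

    prior_odds: odds that the defendant is the source before the DNA match
    random_match_prob: chance a random person's profile matches the sample
    Returns the posterior probability that the defendant is the source.
    """
    likelihood_ratio = 1.0 / random_match_prob  # idealized: no lab-error term
    posterior_odds = likelihood_ratio * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# A 1-in-a-million prior and a 1-in-a-billion match statistic give
# posterior odds of 1000:1 in favor of the defendant being the source.
p = source_probability(prior_odds=1e-6, random_match_prob=1e-9)
```

Under these toy numbers, p comes out just over 0.999, so the chance that the defendant is not the source is about 1 in 1000, right at the floor the author proposes; weaker match statistics or smaller priors would fall below it.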

Speaking of Evidence: An Empirical Study of the Reporting of Forensic Conclusions in US Criminal Trials, CELS 2009 4th Annual Conference on Empirical Legal Studies Paper
“This paper uses trial transcript data to try to determine the empirical reality of expert testimony in actual U.S. criminal trials about a variety of forensic techniques, including latent prints, firearms and toolmarks, forensic document analyses, and forensic DNA profiling.”

Validation and Verification of Computer Forensic Software Tools—Searching Function, Digital Investigation, Vol. 6, Supp. 1, Sept. 2009, pp. S12–S22
“The process of using automated software has served law enforcement and the courts very well, and experienced detectives and investigators have been able to use their well-developed policing skills, in conjunction with the automated software, so as to provide sound evidence. However, the growth in the computer forensic field has created a demand for new software (or increased functionality to existing software) and a means to verify that this software is truly ‘forensic’ i.e. capable of meeting the requirements of the ‘trier of fact’. In this work, we present a scientific and systemical description of the computer forensic discipline through mapping fundamental functions required in the computer forensic investigation process. Based on the function mapping, we propose a more detailed functionality orientated validation and verification framework of computer forensic tools. We focus this paper on the searching function. We specify the requirements and develop a corresponding reference set to test any tools that possess the searching function.”

Visions of Deception: Neuroimages and the Search for Truth, 42 Akron L. Rev. 739 (2009)
“The historical use of science in the search for truth has posed consistent evidentiary problems of definition, causation, validity, accuracy, inferential conclusions unsupported by data, and complications of real-world applications. As the Innocence Project exoneration data show and the National Academy of Sciences Report on Forensic Science suggests, our reach in this area may well exceed our grasp. This article argues that the neuroimaging of deception – focusing primarily on the functional magnetic resonance imaging (fMRI) studies done to date – may well include all of these problems. This symposium article reviews briefly the types of neuroimaging used to detect deception, describes some of the specific criticisms leveled at the science, and explains why this small group of studies is not yet courtroom-ready. Arguing that the studies meet neither the general acceptance nor reliability standards of evidence, the article urges courts to act with restraint, allowing time for further studies, further robust criticism of the studies, additional replication studies, and sufficient time for moral, ethical, and jurisprudential rumination about whether the legal system really wants this type of evidence.”

Visions of Deception: Neuroimages and the Search for Truth, 92 Judicature 188 (2008-2009)
“The recently issued National Academy of Sciences report offers a unique opportunity to revamp the forensic science delivery system.”

1 See, e.g., Defense Uses Academy of Sciences Report in Attempts to Discredit Forensic Evidence, N.Y.L.J., May 12, 2009; Defense Counsel View Report as New Weapon, N.L.J., May 11, 2009; Plugging Holes in the Science of Forensics, N.Y. Times, May 11, 2009; Starting the Flood of New Legal Challenges From the NAS Report, Indiana Defender, May 2009, at 1.

2 See generally Ten Ways You Can Use the NCSTL Web Site, NIJ Journal, No. 263.
