Probable cause?

October 24, 1997

Should juries get the help of probability theory to guide them in complex trials involving quantitative scientific evidence? David Balding considers the case for and against

A suspect is linked to a sexual assault by apparently overwhelming DNA evidence. In court, however, the jury plainly sees that he does not match the description given by the victim at the time of the offence. Moreover, she says in evidence that he does not resemble the man who attacked her. He has an alibi, not obviously discredited by the prosecution, and nothing else is presented in court linking him with the crime. What should a jury be more convinced by, the incriminating scientific evidence that is impressive but difficult to understand or the nonscientific evidence that seems to point to innocence? More to the point, how can a juror reasonably weigh up the two sorts of evidence and come to a sensible conclusion?

The brief details sketched above outline an actual case whose implications are occupying the minds of some of the country's most senior judges. The defendant, Denis Adams, was convicted in January 1995, but the conviction was overturned on appeal in April 1996. The case was sent back for retrial the following September, resulting in a second guilty verdict. Nevertheless, a second appeal was allowed and is before the lord chief justice and his colleagues in the Court of Appeal. The court is deliberating not only Adams's particular case, but more generally how juries should be guided in their reasoning in complex cases involving quantitative scientific evidence. At the centre of the debate is whether jurors' common sense, long regarded as the bedrock of the English justice system, can cope unaided in such cases, or whether an attempt should be made to explain to jurors the rules generally accepted by experts for weighing up such evidence.

The branch of mathematics concerned with quantitative reasoning in the face of uncertainty is probability theory. The theory's initial development was in connection with games of chance involving coins and dice. But as early as 1665 the German mathematician and philosopher Gottfried Leibniz was proposing a probability calculus for reasoning about uncertain propositions, and applying it to legal cases. In the legal setting, probability theory is now often associated with an 18th-century English clergyman and mathematician, the Reverend Thomas Bayes, to whom one of its central results - Bayes Rule - is traditionally attributed.

Given the omnipresence of uncertainty in legal reasoning, the existence of a law professional ignorant of Bayes Rule should be as incredible as the existence of a theatre critic ignorant of Shakespeare. Sadly, this ignorance is the rule rather than the exception. The history of attempts by mathematicians to systematise legal reasoning is as long as it is unsuccessful - if success is measured in terms of acceptance by the legal profession. Overambitious mathematical projects, which reached their height in 1785, when the French mathematician Condorcet gave Frederick II of Prussia a weighty volume of calculations aimed at improving the legal system, eventually discredited the enterprise.

Despite these setbacks, the claim that probability theory conveys insights crucial to sensible reasoning, particularly about scientific evidence, has not gone away. Indeed, the advent of DNA evidence, presented in court in numerical terms, has made it almost impossible to avoid. In the US, the issue of how DNA evidence should be explained to jurors was tackled head-on when the prestigious National Research Council convened a committee of distinguished experts to advise the courts. The committee's report was issued in 1992 to such a hostile reception that an unprecedented second committee was felt necessary.

In the UK, on the other hand, different courts took different stances without any clear direction from the senior courts until 1993, when the conviction of Andrew Deen was overturned on appeal, in part because the DNA evidence was held to have been presented misleadingly. Witnesses for the prosecution were found to have fallen into the "prosecutor's fallacy", so called because it usually - though not always - favours the prosecution. The fallacy is an error in reasoning with probabilities that is analogous to the logical error of confusing "A implies B" with "B implies A". For example, "cow" implies "four legs", but "four legs" does not imply "cow".

In the Deen case, statements were made that confused "there is only one chance in three million of observing the DNA evidence, given that the defendant is innocent" with "there is only one chance in three million that the defendant is innocent, given the DNA evidence". The two statements are not equivalent, and confusing them was deemed unfair to Deen. Among the repercussions of this judgment was closer attention to the presentation of DNA evidence in other cases. Additional instances of the fallacy were detected, as well as a number of related errors, and further convictions based primarily on DNA evidence were overturned on appeal.
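To see why the difference matters, consider a purely illustrative calculation, not taken from the Deen case itself. If, say, 30 million people could conceivably have left the crime-scene sample, then a match probability of one in three million means that about ten innocent people would also be expected to match. On the DNA evidence alone, the probability that a matching defendant is innocent would then be closer to ten in eleven than to one in three million.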

By the time of the original Adams case in 1995, UK courts were becoming careful about the prosecutor's fallacy, but additional concerns about jurors' potential for misinterpreting numbers presented in connection with DNA evidence remained unaddressed. Adams's defence team was concerned that the jury would be overwhelmed by the DNA evidence, to the detriment of the strong, nonscientific evidence in his favour. To counter this, the defence called Peter Donnelly, now professor of statistical science at Oxford University, to explain how Bayes Rule can be used to combine different sorts of evidence quantitatively. This task immediately presented a dilemma: although the prosecution offered numbers to quantify its DNA evidence, the defence evidence, involving, for example, the credibility of an alibi witness, was not easily quantified. Donnelly suggested possible values for the required probabilities - such as the probability that the defendant would produce the alibi witness both (1) if he were guilty and (2) if he were innocent. He showed that, given values for these probabilities that jurors might well regard as reasonable, the case against the defendant was only moderately strong, and perhaps insufficient for a satisfactory prosecution.

By a bizarre twist, although it was the defence that had introduced the Bayes Rule evidence, without objection from the prosecution, it was this evidence that led to the success of its appeal. The Court of Appeal not only ruled that the Bayes Rule evidence was incorrectly summed up, it went on to say that it should never have been presented: "Jurors evaluate evidence and reach a conclusion not by means of a formula, mathematical or otherwise, but by the joint application of their individual common sense and knowledge of the world to the evidence before them." But since the admission of this evidence was not contested, the court did not hear argument on its admissibility and was explicit that the ruling should not be regarded as conclusive.

At retrial, the defence once again wished to explain Bayes Rule to the jurors, but this time the prosecution objected. After hearing arguments in the absence of the jury, and noting the nonbinding nature of the senior court's ruling, the judge was persuaded that the evidence was properly admissible. Indeed, this time the defence went further and, in consultation with the prosecution, presented jurors with a form containing spaces for them to enter their own values for the required probabilities - and calculators were issued to assist jurors with the necessary computations.

For the second appeal, the court has given notice that it intends to hear argument on the admissibility of evidence explaining Bayes Rule and to make a more definitive ruling. Given the forcefulness of its earlier ruling, those who favour explaining to jurors, when appropriate, the probabilistic rules for combining evidence have little ground for optimism. Almost everyone, on first meeting the problem, agrees that probability theory and mathematical formulae could cause more harm than good among jurors of varying educational backgrounds. But few can come up with a good alternative for avoiding the various pitfalls that lie in the path of those who reason with probabilities unguided. Also troubling is the idea of preventing jurors from hearing about what is widely regarded as the only satisfactory method for reasoning about uncertainty.

Whatever the court decides, perhaps the best outcome would be if lawyers and expert witnesses themselves were to pay more attention to the rules for combining probabilities, so that their understanding could be reflected in more careful and helpful, yet accessible, ways of presenting evidence to juries.

David Balding is professor of applied statistics, University of Reading.

* How Bayes Rule helps jurors weigh up evidence

Bayes Rule provides a way to quantify how much we learn from evidence about theories. The theories need not be scientific, but must have the property that one of them, and only one, is true. For example, suppose that we have some evidence (E) and two theories: innocent (I) and guilty (G). If G were true, the probability that we would have observed E is denoted Pg. Similarly, under I, the probability of observing E is Pi. The key quantity that measures how much information E gives in distinguishing G from I is the likelihood ratio, R, which is the ratio of Pg to Pi. If R is very large, E strongly favours G over I, and vice versa. If R = 1 then E is irrelevant in distinguishing between G and I.
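For example (the numbers here are invented purely for illustration): if a reported DNA match were certain to be observed were the defendant guilty, so that Pg = 1, while the chance of a coincidental match from an innocent person were one in two million, so that Pi = 1/2,000,000, then R = Pg/Pi = 2,000,000 and the match favours G over I by a factor of two million. If instead E were equally likely under either theory, then Pg = Pi, R = 1, and E would carry no weight.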

In the two-theory case, it is easiest to work with odds. For example, odds of 3-1 on G mean that there are three chances in four that G is true. These odds are related to bookmakers' odds, but they reflect probabilities rather than payouts, which are rarely the same. Bayes Rule says that the odds on G after observing E are given by R times the odds before E was observed. If there are more than two theories, the calculations are similar but more complicated. When there are several different items of evidence, Bayes Rule is applied to each in turn, in any order, and the final odds for one item become the starting odds for the next. Also, the value of R at any stage may depend on the items of evidence previously analysed.
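The arithmetic involved is simple enough to set out explicitly. What follows is a sketch only, written in Python with invented figures (a notional 200,000 possible culprits, a DNA likelihood ratio of two million, and two items of evidence favouring innocence); it shows the mechanics of updating the odds item by item, and is not drawn from any evidence put before a court.

    # Sketch of updating the odds on guilt (G) with Bayes Rule.
    # All figures below are invented for illustration.

    def update_odds(prior_odds, likelihood_ratio):
        # Bayes Rule in odds form: posterior odds = R times prior odds.
        return likelihood_ratio * prior_odds

    def odds_to_probability(odds):
        # Odds of "x to 1 on G" correspond to a probability of x / (1 + x).
        return odds / (1.0 + odds)

    odds = 1.0 / 200_000                # starting odds: one suspect among 200,000 possible culprits
    for R in (2_000_000, 0.5, 0.7):     # DNA match, then two items favouring innocence
        odds = update_odds(odds, R)

    print(f"final odds on G: {odds:.2f} to 1")                     # 3.50 to 1
    print(f"probability of G: {odds_to_probability(odds):.2f}")    # 0.78

The point of working in odds is that each item of evidence simply multiplies the running total, in whatever order the items are taken.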

In criminal trials, "innocent until proven guilty" might be interpreted as meaning that the initial odds on the defendant's guilt should be the same as any possible culprit's. For "anonymous" crimes in large cities, there could be many possible culprits so that even a very large value of R may not suffice to bring the low starting odds on G up to convincing final odds.
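For instance (again with purely illustrative figures): if half a million people could in principle have committed such a crime, the starting odds on any one of them are 1 to 499,999, and a DNA likelihood ratio of one million then yields final odds of only about 2 to 1 on guilt - roughly a two-in-three probability.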

In the Adams case, the prosecution reported a value of R of 200 million for the DNA evidence. The defence challenged this, suggesting that it might be as low as two million. Also, by taking into account values of R less than one for the non-DNA evidence, together with low starting odds corresponding approximately to the number of men in the area of the crime scene at the relevant time, it suggested that the final odds on G might reasonably be assessed to be as low as 3 to 1, presumably insufficient for a criminal conviction.
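As a rough sketch of how a figure of that order could arise (the intermediate numbers here are illustrative, not those actually put to the jury): if about 150,000 men could in principle have committed the crime, the starting odds are roughly 1 to 150,000. Taking the defence's figure of two million for the DNA evidence, and a combined likelihood ratio of about one quarter for the non-DNA evidence (the victim's failure to identify Adams, the alibi, and so on), Bayes Rule gives final odds of (1/150,000) × 2,000,000 × (1/4), or about 3.3 to 1 on G - roughly a 77 per cent probability of guilt.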
