The Nuclear Verdict blog series

Part V - Thoughts and considerations

George R. Speckart, Ph.D. & Bill Kanasky, Jr., Ph.D.


In the first three parts of our nuclear verdicts series, we covered an overview of the definition of a nuclear verdict, some historical context, and the five causative factors of nuclear verdicts. In Part IV, we outlined specific examples of how a scientific approach to litigation research has resulted in the prevention and suppression of nuclear verdicts. In this final part of the series, we offer some concluding thoughts and considerations.


Closing Considerations

It is of course possible to approach this issue academically and design studies that identify which of the causative factors described earlier in this series wield a predominant influence over nuclear outcomes. Such research would involve dissecting multiple cases, but it would carry as an encumbrance the labeling, definitional, and identification problems mentioned previously. It would also have to be funded, and the costs would not be trivial.

Given the availability of the scientific method, the most pressing question therefore is, “What do policy makers want?” Do they want to examine the potential antecedents of the nuclear verdict and formulate theoretical conclusions about how those antecedents create the observed effects? We already know that some of the factors (e.g., problematic witnesses and egregious conduct) can be fatal to a case, and that prior scientific juror profile research can pre-empt stealth and other punitive jurors (Speckart, G., “To down a stealth juror, strike first,” National Law Journal, 1996, vol. 19; Speckart, G., “Identifying the plaintiff juror,” For the Defense, 2000, vol. 42). But what do policy makers really want?

It seems clear that what legal teams and their in-house directors really want is suppression and control. We know, however, based on the previous observations, that these are already available for the asking. If that is the case, then why does this issue remain a challenge?

We have already documented the putative “fear and loathing” of science in the legal industry. While we doubt that this state of affairs applies to everyone in the industry, there does appear to be an unwarranted skepticism that science would actually work. There are other factors at work as well. For example, one litigator told us that “some people would find the claim that you can predict verdicts to be offensive.” We are not sure what is offensive about the claim, but the statement warrants consideration.

The jury research industry is enormous, with hundreds, if not thousands, of practitioners. Jury research is done, its clients report, not to predict damages outcomes but to predict “themes.” In other words, they are saying, “We believe the research predicts themes (what jurors will think in response to the case) but not damages (how much they will award).” When this position is subjected to scrutiny, however, it falls apart: how can one segregate and predict one but not the other? The damages are the outgrowth of the themes that jurors find to be persuasive. If one is accurately forecasted, then so is the other; if one is not, then neither is the other.

Additionally, mock jury research is often done incorrectly (i.e., not scientifically, thereby defeating predictive validity). Specifically, gathering a group of friends and family members to listen to and talk about your case is not valid scientific methodology. Mock jurors need to be carefully recruited, screened, and demographically matched to replicate the jurors who will likely show up in the courtroom. This is a tedious process that is often skipped in favor of cost savings.

The same is true for the “real time feedback” dials that are often used during mock trials. Real jurors do not judge attorney presentations and witnesses with fancy dials or any other gadgets; therefore, predictive validity can never be attained with this system. Unfortunately, many clients are enamored with the “wow” effect of such technology, falsely assuming that more sophisticated technology equals greater predictive validity.

One of the authors recently asked an insurance claims specialist, “What do you think those dials, and the fancy lines on the screen, are actually measuring?” The claims specialist responded, “Hmmm… I really don’t know, but boy, is this stuff cool!” In another instance, an equipment provider of the dials and meters admitted to us that his clients liked the technology because it was “eye candy.” This very same technology was used during the 2016 presidential campaign TV coverage, when several news outlets broadcast focus group participants (voters) responding to the debate performances of each candidate. Most of the results of those focus groups showed Hillary Clinton clearly outperforming Donald Trump over and over again. How did that work out?

Perhaps the most serious shortcoming of “electronic dial feedback” research is that the data are collected in real time, as moment-to-moment responses, whereas jurors do not deliberate based on those responses. They deliberate instead on what they retain in memory and retrieve from memory much later in the deliberation room: a truncated subset of their reactions that has invariably morphed into something far different because of how memory operates. Finally, one of the key functions of jury research is information reduction: cutting back the massive number of potential perceptions of the case to those which are more correct than clever. “Electronic dial feedback” results do just the opposite, piling on massive additional amounts of data that simply confound the issues.

Some of the other factors that invalidate mock trial methodology include: a) not showing witness testimony, or choosing videotape excerpts that are biased or unrepresentative; b) leaving out key evidence of various types; c) utilizing a watered-down plaintiff case that is distorted or incomplete (even poor graphics on one side can cripple a project); and d) inadequate or improper simulation of actual trial conditions (as discussed above).

The authors have “parachuted” in on many high-exposure cases in which a “mock trial” had already been performed, with results fully favoring the defense. When we redesign and repeat the mock trial on the very same case, we often see nuclear verdicts from mock jurors in deliberations. Many clients, obviously without any scientific training, assume that “a mock trial is a mock trial is a mock trial.” Nothing could be further from the truth: the validity and reliability of mock trial results are fully dependent on the mock jury sample composition, research design, methodology, and analysis.

However, it does appear that the success of Reptilian manipulation tactics against defense witnesses has indeed “woken up” the insurance defense industry. One of the current authors has debunked and redefined the plaintiff Reptile Theory and has provided a blueprint for defeating the Reptile methodology in both discovery and trial (see Kanasky, W. F., “Debunking and redefining the plaintiff Reptile theory,” For the Defense, 2014, vol. 57; Kanasky, W. F., Derailing the Reptile Safety Rule Attack, 2016, www.courtroomsciences.com; Kanasky, W. F., & Loberg, M., “Rehabilitating the defendant in the reptilian era: A neurocognitive approach,” For the Defense, 2017, vol. 59; and Kanasky, W. F., Speckart, G., & Parker, A., “Early Anti-Reptile Tactics May Save Millions of Dollars: The role of the litigation psychologist and why it matters,” Trucking Industry Defense Association, 2019, Spring Newsletter). In particular, Derailing the Reptile Safety Rule Attack offers a deep psychological and scientific breakdown of the Reptile questioning tactics and how to thwart them with high levels of success. Additionally, the same author and a defense attorney invented and implemented the “Reverse Reptile” (Motz, P., Kanasky, W. F., & Loberg, M., “The ‘Reverse Reptile’: Turning the tables on plaintiff’s counsel,” For the Defense, 2018, vol. 60), a strategy developed to use Reptile tactics on both plaintiffs and adverse co-defendants.

Our jury research results, along with innumerable stories from attorneys about deposition and trial testimony successes, clearly illustrate that the scientifically supported “anti-Reptile” methodology is seeing great success at the witness level, but perhaps is lacking at the jury research level due to the insurance defense industry’s cost-savings philosophy. Indeed, a likely explanation for why witness training advances have “caught on” over the past decade, while resistance to scientific research persists, is the lopsided cost differential between the two, even though the savings from obtaining scientifically derived damages estimates dwarf the costs of the research.

Ultimately, the decision to use science will rest on the institutional and policy barriers inherent in the client’s organizational setting. In the insurance industry, for example, the claims department is responsible for the duty to defend and therefore pays for jury research, but the results of that research benefit the indemnity side of the house, not the claims side that pays for it. As one insurance insider told us, “No one from the claims side wants to spend $50,000 to save $200,000 from the indemnity side of the house.”

As such, the plaintiff’s bar has fully taken advantage of this claims-indemnity conflict of interest by outmaneuvering the defense from the moment the case is filed. By the time excess coverage kicks in, plaintiff’s counsel often has the defense behind the eight ball. Excess coverage claims people have no problem spending money to defend the case properly, but it is often too little, too late. The result: a nuclear verdict or, equally bad, a nuclear settlement.

While the nuclear verdict topic is attracting strong attention today, no one seems to be talking about how the nuclear settlement is becoming a major problem. Paying out nuclear settlements inevitably leads to more lawsuits being filed against that particular client, as word spreads fast in the plaintiff’s bar about which companies are fearful of trial and would rather pay their way out of trouble.

In short, when those who decide whether to use the research are evaluated solely on the basis of short-term budgetary constraints, one is likely to encounter “budget” research that is unscientific. In general, those who have to pay for the research are not the ones who reap the financial benefit, so it will not get done. For science to permeate litigation practice, institutional changes are required that tie long-term cost savings to the policy decisions made for short-term operations.
