Do virtual mock trials produce valid results?
We have spent over 35 years investigating and refining the research methodology used in conventional (“live”) mock trials, with the objectives of evaluating and maximizing their ability to predict actual trial outcomes (Speckart, 2008, 2010). The fundamental principle underlying these efforts may be summarized in a single conclusion: the more closely a research project simulates actual trial conditions, the more predictive its results will be of actual trial outcomes.
Over this time frame we have observed instances in which mock trials have predicted actual trials with remarkable accuracy. In one product liability case, the largest of its kind in California at the time, the mock juries awarded an average of $58 million, and the actual jury awarded the same amount. Of course, there are also instances in which predictive validity was not obtained, but the accuracy of prediction was invariably tied to how well the mock trial elements mirrored those of the real trial.
Over the years, some clients have requested jury research using online tools. This digital research was often conducted to allow for an increase in sample size or to test multiple litigation variables, objectives that may have been cost-prohibitive to pursue via in-person research. With the onset of the Coronavirus pandemic, clients began to request “virtual mock trials” – essentially a substitute for in-person mock trials. This request, however, also raised questions about how closely actual trial conditions could be simulated, and thus the degree of predictive validity that could be obtained. The necessity of obtaining some kind of feedback for pending litigation in the era of COVID-19 prompted us to refine our online jury research, and several cases were tested using an online platform with attorney presentations and deliberations among breakout groups of test respondents forming multiple juries.
Your author, with a background including a heavy emphasis on psychological measurement and research design methodology, was initially skeptical about whether these online projects could, in fact, produce valid results. Once enlisted into the practice and implementation of virtual or “online” mock trials, however, it became apparent that the quality of useful information learned in the process could not be discounted. The perceptual foibles that encumber juror information processing were on full display, for example, in a patent case with technical arguments that were not easily or readily assimilated. Having run dozens of in-person mock trials in patent cases in the same venue (the Eastern District of Texas), we could see clearly that the juror response patterns taking shape in the virtual projects were very similar to those observed in person. Indeed, the feedback obtained in the virtual mock trials on how jurors distort, modify, or truncate information was clearly instrumental, even critical, to forging the most effective presentations possible for the forthcoming actual trial.
It did not take much in the way of additional demonstrations to create a convincing case that online projects were worth the expense, time, and effort. However, mock jury research has several goals, and obtaining valuable insights into the manner in which jurors process information and problem-solve the case is only one of them. Another goal is to ascertain the probable verdict outcome and concomitant damages awards in the event of a plaintiff verdict, as a means to guide settlement negotiations. This latter goal is one for which those with a heavy bent toward scientific rigor are more likely to insist that actual trial conditions be simulated as closely as possible. In other words, those of us with a more traditional scientific background are more prone to conclude that online research will be handicapped in forecasting actual exposure in specific dollar amounts, owing to the inherent inability of such research to be truly realistic, i.e., to simulate actual trial conditions.
At this point we reach a fork in the road where the way ahead is currently unknowable. The first open question is how accurate the present database of virtual mock trial results actually is in predicting verdict and damages. Ascertaining the predictive validity of mock trial research is a protracted process that requires the accumulation of multiple actual trial outcomes against which to compare the jury research outcomes. There does not, at present, exist a sufficient record of actual trial outcomes with which to compare the associated virtual mock trial results, owing to the backlog of cases still waiting to be tried.
At this point, then, the “inability” of virtual mock trials to predict actual verdict and damages is an assumption based on the fact that virtual projects are missing vital actual trial components (e.g., extended witness testimony). It was noted earlier that, with respect to identifying the manner in which jurors process information, virtual or online research proved more robust (immune to methodological shortcomings) than initially supposed. It is conceivable that the same state of affairs may be observed over time in forecasting verdicts and damages as well, but, as we have stated, this remains to be seen.
Speckart, George, “Trial by Science,” Risk & Insurance, October 2008, vol. 19, no. 13.
Speckart, George, “Do Mock Trials Predict Actual Trial Outcomes?” In House, Summer 2010, vol. 5, no. 13.
Counter-Anchoring Damages is More Important than Ever