A Case Study in Scientific Litigation Practice - Part 2

Cracking the Code in the Eastern District of Texas - Part 2 of 3

CSI - Courtroom Sciences Inc.


In Part 1 of this series, we described the success we have seen in the Eastern District of Texas by applying psychological research methodology in a series of cases tried in that venue. We also established that the methodology is not venue-specific and can be leveraged in any venue throughout the country. In this second part, we describe what the methodology is and why it works.

How It Works 

Up to this point, terms such as “psychological technology,” “research methodology,” and the like have been used without a description of the precise procedures that constitute these approaches, so the present section illuminates the actual processes behind the results observed in the East Texas litigation. Briefly, these processes fall into three methodological domains, reflected in the three types of published articles mentioned in the previous section: 1) trial simulation research; 2) juror profiling; and 3) substantive knowledge of how jurors construe the specific types of cases under study (in this instance, patent cases).

These three domains interact with one another. For example, as one initially tackles the problem in a given venue, exploratory research provides insights into the third domain (substantive knowledge of how jurors problem-solve these particular cases, whatever they may be). Once obtained, this knowledge guides research design and implementation (the first domain), making the research progressively more predictive. As more research is implemented, methodological shortcomings are eliminated, and knowledge of juror profiling (the second domain) accrues simultaneously. In short, it is a process of extended diligence and evolution, based on multiple research exercises, with interactive and synergistic benefits realized over time.

Thus, in the case of the East Texas problem, the initial research yielding substantive knowledge (the third domain) actually occurred in other patent cases in other states and was later found to hold for East Texas as well. The predictive capabilities of the research were subsequently maximized once it was determined how trials in East Texas were conducted – for example, opening with a tour de force performance on the witness stand by the plaintiff inventor. Precisely matching the venue characteristics of panels recruited for the research against those of jurors actually seated at trial also required multiple projects to achieve optimal results.

Faithful simulation of the courtroom environment in such research was yet another factor contributing to predictive validity. Finally, recall the phrase “methodological shortcomings are eliminated”: a substantial portion of this process quite simply involves research skill, protracted sweat, vigilance, and stamina in weeding out procedural errors, preparation shortcuts, imbalances in demonstrative exhibits, and other nagging “bugs” that undermine predictive validity. Getting all of the moving parts right, to the point that the research actually wields predictive power, is a labor of devotion, skill, and resources – not a casual undertaking by any means. Additional factors also came into play and will be discussed momentarily, but first, the three domains under consideration are examined in closer detail.

Trial Simulation Research. One particularly important aspect of patent litigation is that these are invariably federal court cases, and because federal judges typically do not allow jurors to be interviewed, little, if anything, can be known about how verdict and damages decisions are made absent valid trial simulation (i.e., “mock trial”) research. In short, without valid research, we do not know what jurors are doing or why, insofar as verdict and damages decisions are concerned. And if we are relying on the research to tell us what jurors are thinking, how do we know that the research is sufficiently valid to be truly reliable?

One way to know whether the research is valid is whether it predicts. Under the paradigms generally accepted as reflecting true scientific progress, one first obtains the ability to predict, and from the ability to predict comes the ability to control. Consequently, documenting the predictive validity of the research was the first step toward establishing the reliability of its conclusions [1, 2]. As discussed in these articles, many litigators are skeptical about the ability of jury research to predict accurately, yet at the same time they are procuring services in a field with absolutely no barriers to entry and no qualifications or credentialing standards. The American Society of Trial Consultants once held a vote on whether its members should be required to hold any type of credentials, and the proposition was voted down.

While accurate prediction of trial outcomes is still not achievable in every case, the cases in which accurate prediction did occur in the present efforts (a substantial proportion, exceeding 80%) were a signpost suggesting that the associated findings could indeed identify the key factors precipitating damage awards. Another important signpost was the ability to take findings from one project and confirm that additional evidence or arguments suggested by those findings produced the desired results in subsequent research on the same case.

Thus, the litigators in Forgent v. Echostar mock-tried their case four times (after “losing” the first three mock trials) before finding the “holy grail” that would ensure a defense verdict. Of all the cases across which the $1.4 million average damages figure (cited previously) was computed, this case was the only one mock-tried four times – and one of the few in which an outright defense verdict was obtained. Coincidence? Hardly. Four of the five defendants settled out of the case for $28 million; only Echostar held out, relying on the research, and obtained the defense verdict at trial.

To give another example, in the highest-damages case involved, three mock juries awarded $2 million, $2 million, and $12 million – an average of $5.3 million – and the actual jury verdict was $5.4 million. Compare this to a situation in which a project produces three mock juries that each render a defense verdict but the real trial produces substantial damages (a common problem when using “budget” jury research providers). Which project can be counted on to reveal the subtle nuances of juror problem-solving patterns that will illuminate a trial strategy that actually works in suppressing damages? Which research findings are sufficiently reliable to serve as a basis for reformulating a defense that has the desired impact on the courtroom floor? From this vantage point, the relationship between prediction and control becomes self-evident.
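For clarity, the $5.3 million figure cited above is simply the arithmetic mean of the three mock jury awards:

\[
\frac{\$2\text{M} + \$2\text{M} + \$12\text{M}}{3} \;=\; \frac{\$16\text{M}}{3} \;\approx\; \$5.3\ \text{million}
\]

which placed the research within roughly $100,000 of the actual $5.4 million verdict.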

Quite simply, as the accuracy of the research continued to improve, the ability to use its results to control trial outcomes became more and more effective.

Since Forgent v. Echostar in 2007, defense verdicts in East Texas have been coming faster and faster. In East Texas patent matters, the process most often turns on identifying systematic patterns of juror miscomprehension. Because very little of the entire case fact scenario is accurately assimilated by jurors, the challenge is to identify the areas of miscomprehension that harm the defense and find ways to remediate them and, conversely, to identify the areas of miscomprehension that help the defense and enhance them. This approach is particularly relevant in “patent troll” litigation, in which junk patents are asserted and the case holds “merit” only from the standpoint of a lay audience that does not fully comprehend it.

Concludes in Part 3.


References 

1. Speckart, G., “Trial by Science,” Risk & Insurance 19, no. 3 (2008).
2. Speckart, G., “Do Mock Trials Predict Actual Trial Outcomes?,” In House 5, no. 13 (Summer 2010).
