Bringing Objectivity Into Settlement Decisions: The Nexus of Law, Science and Ethics in Litigation Risk Assessment


Abstract: Over the last several decades, refinements in psychological research methodology as applied to litigation risk assessment have led to increased validity and precision, making it possible to accurately forecast jury awards in many cases. Rigorous application of scientific research design principles has obviated the need to guess, or rely on hunches, in determining probable damages outcomes in litigation. The continuing failure to use such scientific research tools raises ethical questions as to whether cases are being settled for amounts that diverge substantially from what an actual jury would do with the case. Exemplars are provided showing that failure to objectively determine likely jury damages in making settlement decisions often leads to considerable waste. The question is raised as to whether accountability for such waste should be considered and reconciled. Ethical questions also arise with regard to settling based on "nuisance value" when a case has no merit, thus inducing the proliferation of more frivolous lawsuits.

The intervention of psychological research into litigation strategy is generally considered to have begun in the late 1970s, although the study of jury psychology dates back to the 1950s with Hans Zeisel's seminal work in the criminal field. Indeed, the study of jury psychology was generally dominated by the focus on criminal juries until the last few decades, when damages awards in civil cases began to reach staggering levels. With the amount of money at stake reaching into the billions since the 1990s, increasingly sophisticated means of estimating and forecasting exposure have come into use in civil cases by trial teams and their consultants. Notwithstanding the popularized methods depicted in the film Runaway Jury, the methodologies employed have taken various approaches, with varying degrees of legitimacy as far as scientific rigor is concerned. Typically, the questions surrounding "what works and what doesn't" entail a consideration of scientific research principles, although other factors, as we shall see, come into play.

In the late 1970s, defense counsel needing help in an IBM antitrust matter approached a marketing professor at the University of Southern California: they were desperate to know what the jury was thinking, and proposed that he assist them in obtaining a group of observers, matched to the jury panel, who could be seated in the audience of the courtroom each day and interviewed each evening to obtain specific feedback on courtroom events (e.g., witness performance, comprehension of case issues, agreement with arguments, and ultimately verdict and damages dispositions). This arrangement led to the development of what is now known as a shadow jury, and this particular service of obtaining a panel of courtroom observers is now offered by many trial consulting firms throughout the country for heuristic and tactical feedback during trials.

Subsequently, various forms of trial simulation, or mock trial methodologies, emerged to attempt to identify the relevant themes and issues that would resonate with the jury before actually going to trial, with the additional goal of getting a handle on damages for purposes of estimating exposure. The overriding theoretical impetus behind the evolution of trial sciences has been that the true determinants of verdicts and damages are extra-legal in nature in a general sense and, specifically, a matter of communication and psychology.

As a result, Ph.D.s in communication and psychology were increasingly sought by trial teams and their clients in the early 1980s to assist with the design and implementation of mock jury research, shadow juries, and other activities. As knowledge accumulated – that is, as specific determinants of the psychological decision-making processes of jurors began to be identified – the state of the art advanced quickly in areas found to enhance the persuasion of jurors. To ground this process in reality, post-trial interviews of actual jurors were used as a benchmark to test the validity of research activities with mock jurors. Data obtained from real jurors became a basis for inferring the extent to which pre-trial research with mock jurors was accurate, or "hitting the mark." With repeated actual trial results over the years, trial teams and their consultants were able to compare what mock jurors versus real jurors were deciding in various cases by stacking mock trial data up against post-trial interviews. This accumulation of knowledge led to an increased awareness of how jurors actually make verdict-related decisions in civil cases and of the ways in which mock trial research could fail, resulting in significant refinements to mock trial research methodology for ensuring the accuracy of results. Over time, experienced jury consultants became aware of a host of procedural and methodological pitfalls that could lead mock trial research astray. Many of these pitfalls were among the threats to research validity known from academic treatises on psychological research, while others were specific to the field of litigation research in particular.

Validity: Does It Work?

In research parlance, the term "validity" refers to the extent to which research results can truly be used to infer real-world outcomes; in other words, are the results of a mock trial actually predictive of deliberation outcomes in a real trial? Meeting the criterion of validity – the gold standard of research – requires, among other things, a useful theoretical framework for how jurors actually make decisions. That is, in order to "capture reality" it is necessary first to know how the phenomena under observation (i.e., jury verdict decisions) are actually generated and produced, in order to ensure that critical antecedents or determinants of such phenomena are not left out of the research design.

In the early stages of trial consulting, the adage became widespread that "jurors make up their minds during opening statements." This assertion still generates debate, but as a result of the knowledge accumulated over the last 30 years of research, we now know that the causes of verdict decisions are truly multi-faceted. While some jurors undoubtedly make up their minds very quickly, the research is now clear that the majority of jurors make up their minds as a result of a multiplicity of factors beyond opening statements alone, including the quality of the graphics or visual aids and the persuasiveness of the witnesses.

Advancing the state of the art in litigation, therefore, has naturally led to the increased sophistication of: (1) persuasive approaches to demonstrative aids and computer-driven graphics technology, leading to the ultimate development of the "paperless courtroom" driven by electronic presentation systems and associated hardware and software; and (2) advanced techniques, methods, approaches and protocols for training witnesses.
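To make the benchmarking idea described above concrete, the sketch below shows one minimal way to quantify how well accumulated mock-jury forecasts track actual outcomes across cases. The figures, the log-scale transformation, and the use of a simple correlation are illustrative assumptions, not a description of any particular consultant's method.

```python
# A minimal sketch of benchmarking mock-trial forecasts against real
# outcomes accumulated over many cases. All figures are invented.
import numpy as np

mock_forecasts = np.array([5.2e9, 58e6, 3_667, 1.2e6, 500_000.0])  # averaged mock awards
actual_awards  = np.array([5.0e9, 55e6, 1.0,   9.0e5, 350_000.0])  # real verdicts

# A log scale keeps billion-dollar and thousand-dollar cases comparable.
r = np.corrcoef(np.log10(mock_forecasts + 1), np.log10(actual_awards + 1))[0, 1]
print(f"mock-vs-actual correlation on a log scale: {r:.2f}")
```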

Getting the mock trial research "right" – achieving validity – therefore entails the same variables as getting the litigation effort "right," that is, maximizing the likelihood of a favorable verdict. If the research is to simulate actual trial conditions – which it must to achieve validity – then the same essential determinants of the verdict decision must be present in the research as on the courtroom floor. Litigation is war, and like war, the battle occurs at multiple levels. Just as a fighting force needs a navy, an air force and an army on the ground, a litigation effort needs effective witnesses, persuasive graphics, and a compelling account of the themes, delivered by an effective communicator from opening statements through to the end of the trial. The goal of actually winning a trial therefore requires intervention at all levels, from witness training to theory development to creative graphics to a real jury selection strategy in which research guides the development of favorable versus unfavorable juror profiles. Achieving validity in pre-trial research likewise requires attention to these various determinants of the verdict decision, under the supervisory eye of someone trained in research design and methodology, so that sources of artifact and bias can be eliminated from the research.

In the use of pre-trial research, it is the implementation of solid mock trial research that illuminates the most effective themes and provides an assessment of the likely outcome in terms of verdict and damages. It is also in the area of mock trial research – what it means, and how it should be conducted – that the most confusion and misunderstanding appears to reign among trial teams and their clients.

One of the first questions typically addressed is whether mock trial research is, or even can be, valid at all. By "valid" we mean, in accordance with the prior definition, "Are the themes that the research shows to be effective the same ones that real jurors will use in their decisions?" and "Are the verdict and damages decisions by mock jurors accurate in terms of those that will occur in the real trial?" Over the years, mock trial validity has been shown to be somewhat like a three-legged stool, in that three fundamental components determine whether the goal of validity is achieved:

• Do the elements of the stimuli presented to the mock jurors (arguments, themes, evidence, witnesses) truly reflect what actual jurors will see and hear?

• Are the jurors recruited to participate as mock jurors psychologically similar to those who will actually see and hear the case?

• Are the measurement instruments and elements of research design implemented in a manner that eliminates research artifact and bias?

Actual research results show that when these conditions are met, validity is in fact achieved. It is possible to accurately forecast trial results when the trial team is dedicated enough to take the time to work with qualified researchers to implement the project in a meticulous manner. However, it took many years of progress in the field before the state of the art in pre-trial research evolved to the point at which validity actually began to be achieved.

Table 1. Mock jury awards versus actual verdicts

                              Jury 1    Jury 2    Jury 3    Jury 4    Average   Actually Awarded
Exxon Valdez (punitives)      $2 B      $3 B      $4 B      $12 B     $5.2 B    $5 B
Heavy Equipment Burn Case     $25 M     $37 M     $112 M    --        $58 M     $55 M
AHDC vs. City of Fresno       $1,000    $1        $10,000   --        $3,667    $1

In Table 1 above are actual results that exemplify mock trial research accurately forecasting verdict and damages in real trials. The first, from the Exxon Valdez litigation, focused exclusively on punitive damages, since that was the sole area of interest to the trial team. Four mock juries awarded an average of $5.2 billion, and the real jury subsequently awarded $5 billion (it should be noted that Exxon's stock went up immediately after the jury verdict, as Wall Street had expected a potential punitive award of $10-15 billion). The second project involved one of the world's largest heavy equipment manufacturers, in a case in which an operator received third-degree burns when leaking brake fluid ignited. The client settled out after the mock trial, while the remaining defendants (who did not conduct mock trial research) were hit for $55 million in Los Angeles County. In the third case, a discrimination suit was brought by a housing developer against the city. While the jurors agreed on liability, they did not think damages were warranted to any significant extent, and the results were consistent throughout the research and the real trial.

The results in Table 1 represent just a small sample of potential exemplars from our database of over one thousand mock trials illustrating the validity of research that has been implemented appropriately. Of course there are situations in which predictions can go wrong, particularly with unfortunate rulings by the court or unexpected performance by key witnesses. But overall, barring unusual circumstances, the science works. As one might expect, if one obtains a representative sample of test respondents and provides the input that the real jury would receive at trial, it is a simple proposition that the sample will do more or less what the real jury does in response to the same stimuli. Greater accuracy in forecasting specific numbers is achieved by replication (i.e., averaging over more juries), but the method is sound.
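The replication logic is simple enough to state in a few lines. The sketch below, using the Table 1 figures, averages the mock jury awards to produce the forecast and uses the spread across juries as a rough indication of how much further replication might tighten it; the choice of the sample standard deviation as the spread measure is an illustrative assumption.

```python
# A minimal sketch of forecasting by replication: average the awards from
# multiple mock juries and track the spread. Figures are from Table 1.
from statistics import mean, stdev

mock_awards = {
    "Exxon Valdez (punitives)":  [2e9, 3e9, 4e9, 12e9],  # actual verdict: $5 B
    "Heavy Equipment Burn Case": [25e6, 37e6, 112e6],    # actual verdict: $55 M
    "AHDC vs. City of Fresno":   [1_000, 1, 10_000],     # actual verdict: $1
}

for case, awards in mock_awards.items():
    forecast = mean(awards)
    spread = stdev(awards)  # sample standard deviation across juries
    print(f"{case}: forecast ${forecast:,.0f} +/- ${spread:,.0f} (n={len(awards)})")
```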

Don't Try This At Home

In practice, however, litigators are all over the map in their assessments of the true utility of using a jury consultant. The determinants of this wide variability are rather straightforward from a scientific point of view: mock jury research is psychological research, which is well known in academic circles to carry serious methodological pitfalls. The types of methodological problems that threaten validity in psychological research are the subjects of numerous venerated treatises in academia and require years of study before graduate students in psychology are deemed qualified to conduct such research. In many respects, it is the quintessential example of "don't try this at home," because well-designed research presents a deceptively simple appearance to the untrained observer: it looks easy, but achieving validity requires the appropriate background, training, experience and credentials to design and implement the research, and to collect and analyze the data, correctly.

The reality of the jury consulting field is something far different from the rigors of academia. In practice, there are no barriers to entry, and the only requirement for becoming a jury consultant is to assert that you are one. As a result, the field is full of practitioners who literally come from the ranks of pre-school teachers, acting coaches, and even receptionists and cooks, designing "psychological research" (i.e., mock jury research) for corporate clients in multi-million dollar cases. In the field as practiced today, these individuals claim to be peers of trained Ph.D.s with strong backgrounds in research methodology. Since litigators typically choose jury consultants based on whom they like rather than on who the consultants are (in terms of background and credentials), the result is a great deal of bad research permeating the industry, leading to the common misperception that mock trial research is inherently unreliable. When debate arises as to whether mock trial research "actually works," the correct answer is, "Of course – if you know what you are doing!"

I was recently talking with a jury consultant running a shadow jury for an automotive manufacturer who was distraught because the shadow jury had told his clients they were headed for a defense verdict, when the real jury later awarded a large amount with punitive damages. To obtain valid results from a shadow jury, however, it is important that a psychological match be made between shadow jury members and real jurors when the shadow jury panel is selected in the first place. This jury consultant had no background in psychology and had merely chosen the shadow jury based on who "looked good" (which, incidentally, is also how the client chose the jury consultant). No psychological research methodology was applied to the critical task of selecting participants, because the consultant had no credentials or training in psychological research methodology. At the end of the day, all the lawyers and the clients knew was that "jury consultants are not reliable." This precise scenario with errant shadow juries has come to my attention more than once, and each time the researcher had no credentials in psychology or communication at all – yet they were working for some of the largest corporate clients in the country.
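What a "psychological match" can look like in practice may be sketched in code. The example below pairs each seated juror with the closest available candidate in a space of attitude-scale scores; the scales, the scores, the Euclidean distance, and the greedy one-to-one assignment are all hypothetical illustrations rather than an account of how any particular firm selects shadow jurors.

```python
# A hedged sketch of psychologically matching shadow jurors to seated
# jurors using attitude-scale scores. All names and numbers are invented.
import math

# Profiles: (anti-corporate sentiment, authoritarianism, risk tolerance)
seated_jurors = {"Juror 1": (4.0, 2.5, 3.0), "Juror 2": (1.5, 4.0, 2.0)}
candidates = {
    "Candidate A": (3.8, 2.7, 3.1),
    "Candidate B": (1.0, 4.2, 2.2),
    "Candidate C": (2.5, 2.5, 2.5),
}

matches = {}
available = dict(candidates)
for juror, profile in seated_jurors.items():
    # Pick the unassigned candidate whose score profile is closest.
    best = min(available, key=lambda name: math.dist(profile, available[name]))
    matches[juror] = best
    del available[best]  # each candidate shadows at most one juror

print(matches)  # {'Juror 1': 'Candidate A', 'Juror 2': 'Candidate B'}
```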

There is something about the field of psychology that causes many people to think "anyone can do that." Perhaps more importantly, trial lawyers in particular – and their clients – seem to believe that the trial lawyers themselves are supposed to be "psychologists." If that sounds far-fetched, think closely for a minute about jury selection. The entire function of jury selection, from start to finish, is geared toward the prediction of behavior. After all, what is the trial lawyer doing in jury selection if not trying to eliminate jurors who will vote against him at a later point in time? This is nothing other than attempting to predict how certain individuals will behave in the future. Yet the vast majority of lawyers attempt to do this themselves, and their clients expect them to. When a potential client of mine recently explained that he had been begging his corporate client to authorize the retention of a jury consultant, the client refused, telling his lawyer, "that's what we pay you to do." In other words, the client expected his lawyer to also be a psychologist.

The prediction of behavior is difficult enough for psychologists who spend their entire careers studying the issue, yet lawyers take on this task by themselves every day, and no one seems to think there is a legitimate concern with assigning a task of this nature to the legal team. Instead, the rationalization is made that "no one knows" or "it's an inexact science," when in fact reliable predictors of such behavior can be identified, if the research is designed, and the statistical analyses performed, by someone with the proper training to reveal accurate juror profiles.

The focal point in this endeavor is often the Supplemental Juror Questionnaire, a document administered to prospective jurors before jury selection, in which they record their background information, demographic data, experiences, attitudes, beliefs, and other characteristics. What is it for? To help determine whether counsel is looking at a "good" versus "bad" juror. What is a "good" versus "bad" juror? A juror who will vote for, versus against, you at some later point in time. In short, the Supplemental Juror Questionnaire is a document intended to provide data for the prediction of behavior. How are these documents produced? Typically by the lawyers, often by themselves, occasionally with the help of a jury consultant about whose credentials, background, or capability in the realm of behavioral prediction the lawyers generally know nothing. I have in front of me, as I write, an e-mail sent to all of the members of the American Society of Trial Consultants by a "jury consultant" that reads as follows: "I have been engaged to assist in the drafting of a jury questionnaire for the defense in a sexual harassment case. I would appreciate some help with ideas for the questionnaire." The person did not know how to identify which questions would predict plaintiff versus defendant verdict orientation. The reason? His background was accounting; his qualification for being a jury consultant was that he had obtained a CPA credential.
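For readers curious what "statistical analyses performed to reveal accurate juror profiles" can mean concretely, the sketch below fits a logistic regression to coded questionnaire responses from prior mock jurors, yielding item weights and a plaintiff-orientation probability for a prospective juror. The items, codings, data, and choice of logistic regression are illustrative assumptions only, not the specific analysis the article describes.

```python
# A minimal sketch, assuming questionnaire items have been coded numerically
# and verdict orientation (0 = defense, 1 = plaintiff) was observed for mock
# jurors in earlier research. All data here are fabricated.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Rows: mock jurors; columns: coded items, e.g., (prior lawsuit experience,
# anti-corporate attitude score, trust-in-institutions score).
X = np.array([[1, 4.2, 2.0], [0, 1.5, 3.8], [1, 3.9, 2.5],
              [0, 2.0, 4.0], [1, 4.5, 1.8], [0, 1.2, 3.5]])
y = np.array([1, 0, 1, 0, 1, 0])  # observed verdict orientation

model = LogisticRegression().fit(X, y)
print(model.coef_)  # which items actually predict orientation

# Score a prospective juror's answers: [P(defense), P(plaintiff)].
print(model.predict_proba(np.array([[1, 3.0, 2.2]])))
```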

The area of jury selection, however, is just the small tip of the iceberg. The vast chunk beneath the surface is the enormous arena of settlement, which is, as we know, the chief manner in which lawsuits are resolved. The issues are similar: instead of predicting behavior individual by individual, the decision-maker is now faced with the task of predicting the behavior of a jury. How are settlement decisions made? How does one decide how much to pay to dispense with a case? Clearly, settlement decisions are based on a number of factors, but one of them is certainly what a real jury would likely do with the case. In order to make this determination, one must have access to research that achieves the goal of validity, as defined previously.

What Is The Problem?

As the preceding discussion suggests, the validity of the research is likely to be a function of the knowledge, training and credentials of the researcher. Since the time it became known that validity could in fact be achieved, as the examples in Table 1 show, numerous examples of well-designed research have accurately forecasted jury verdict awards. In the vast field of trial consulting, however, there are more examples of mock trials that have not. There are two main antecedents of invalid research. In many cases, the trial team simply does not want to spend the money, time and/or effort required to run the research adequately. In other cases, the cause of the faulty research is the same as before: individual practitioners without the requisite methodological training are conducting psychological research, and because they do not know the appropriate design criteria, they obtain biased results. Consequently, those who make settlement decisions are apt to doubt the reliability of the research when coming up with a dollar figure for dispensing with a case, and end up making such decisions based on "hunches" – hunches that can be off by far more than it costs to pay for proper research to determine what a jury would actually do with the case.

During a recent seminar on IP litigation, an esteemed federal judge who has presided over literally hundreds of cases commented on the difference between focus groups and mock trials, stating that he preferred focus groups since, with mock trials, "I have never seen a law firm lose a mock trial that it paid for." In my presentation, which followed, I described a situation in which four mock trials were conducted for the defendant in a $70 million patent case, and the first three ended in dismal defeat for the client. In the fourth, a novel decision was made to stipulate to infringement and merely contest validity (Forgent v. Echostar, 2007, Eastern District of Texas, Marshall Division). Only this fourth mock trial, with the novel strategy, yielded a defense outcome. The same strategy was subsequently taken to the courtroom floor, resulting in a defense verdict for the client – a verdict the legal community is still talking about to this day, since the Eastern District of Texas is so notoriously difficult for defendants.

The aforementioned patent case, of course, reflects yet another example of what can actually happen in the courtroom if pre-trial research is correctly designed and implemented, and if the trial team has the stamina to keep working on it (through research) until they get it right.

Those who believe that such measures do not make a difference are not to blame: they simply do not know that there are methodological design criteria for the implementation of valid research, and they are not aware of the guidelines for determining who utilizes such criteria in their work (and, in case it is not already clear, being a federal judge who has presided over hundreds of cases does not help in this area). What is sometimes shocking, however, is not that people do not know – it's that they don't care.

Does Anyone Want $5 Million?

In a very serious legal malpractice case involving potential damages of nearly $100 million, I was discussing the possibility of conducting a mock trial with lead counsel. She told me, "If the client can settle it for under $10 million, they are going to do that." I asked her, "What if a jury would only award $5 million? What if a jury would only award $2 million? What if the jury would give a defense verdict?" Her reply shocked me: "They don't care," she said. I sat back in my chair and tried to absorb the implications of this position. The first factor that came to mind was an ethical one: is it acceptable to spend someone else's money in a manner that is not necessarily required? If you can take a case to court and win, or get out with a $5 million verdict, is it ethical to pay $10 million to "make it go away"? Moreover, is it even ethical to pass on the opportunity to find out what the options are, in terms of likely jury outcomes? Ultimately this case did settle, for an amount that "seemed reasonable," and no one ever determined what a jury would have done with it.

Other counsel claim that they do in fact conduct "research" on their potential verdicts by hiring firms that conduct archival searches of verdicts on similar cases across the venue. I have seen, for example, spreadsheets of jury verdicts in New York asbestos cases with results ranging from about $500,000 to $115 million. Obviously, the determinants of those dollar figures are not to be found simply in the facts that asbestos caused the injury and that New York City is the venue, since all of those verdicts had those facts in common. This example is not extreme, yet lawyers and claims personnel in the insurance industry commonly use such spreadsheets to attempt to put a value on their cases. When the diversity of the numbers is too great to arrive at a point estimate, the preferred methodology is to resort to the "hunch." Those in the field will use different terms ("intuition based on experience"), but the end product is the same.

The other extreme in this scenario is represented by the Senior Vice President of Claims for a major insurance carrier, now deceased, who stopped his subordinate claims handler in the midst of writing a check for $750,000. "We're going to do mock trials," he said. When three juries came back at under $250,000, they returned to plaintiff counsel with a new position: "We'll offer you $400,000 – take it or leave it." They took it, and as a result the insurance carrier saved $350,000 (minus the cost of the mock trial research, about $40,000). What happened? With the certainty of valid science on its side, the insurance carrier stared down its opponent during mediation, and the opponent "blinked."
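The arithmetic behind this example, reproduced below with the article's own figures, is what drives the return-on-investment claim in the next paragraph; treating gross savings divided by research cost as the "rate of return" is a sketch of the author's framing, not a standard accounting definition.

```python
# The $750,000 example as arithmetic: research costing about $40,000
# moved the settlement from $750,000 down to $400,000.
research_cost    = 40_000
initial_payout   = 750_000
final_settlement = 400_000

gross_savings = initial_payout - final_settlement  # $350,000
net_savings   = gross_savings - research_cost      # $310,000
roi = gross_savings / research_cost                # 8.75x, i.e., "over 800%"

print(f"gross ${gross_savings:,}, net ${net_savings:,}, ROI {roi:.0%}")
# -> gross $350,000, net $310,000, ROI 875%
```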

The obvious cost effectiveness of valid research, however, escapes most decision-makers when it comes time for settlement. Instead, numbers in the millions are somehow "divined" with no factual basis whatsoever for inferring what a jury would actually do with the case. Those making such settlement decisions generally resist the notion that they do not, in fact, know what a jury would actually do. They claim that they do, but when pressed, they cite "intuition" based on "experience" or various other forms of guessing that are indistinguishable from a "hunch" – the same type of hunch the claims adjuster was acting on in writing the initial check for $750,000 in the first place.

What is particularly noteworthy about that incident is that the damages were always expected to be below $1 million, yet the insurance company derived a clear benefit from conducting the research anyway. Most of those in a position to utilize jury research automatically assume that cases under $1 million are "not worth it" or "do not warrant this type of work," yet here is one clear instance of a rate of return on investment of over 800% (a savings of $350,000 against research costs of about $40,000). Imagine how much could be saved in the type of case mentioned at the outset, in which the trial team simply decided that, if the case could be settled for under $10 million, they would take the deal.

The key themes of the present treatise are, therefore, not only the potential for enormous cost savings through the use of science in determining proper settlement amounts, but also that many cases can be won instead of settled – if the trial team is really interested in winning. The East Texas IP litigation arena is characterized by settlement after settlement, with very few trial teams willing to go to the lengths seen in the Forgent v. Echostar matter, in which four mock trials were necessary to find the winning strategy. It should be noted that there were four other defendants in the Forgent v. Echostar litigation who each settled their cases for large sums, doubting that the research was actually showing them how to win. Moreover, of the dozens of IP cases we have worked on in East Texas, nine out of ten run one mock trial and then simply settle, instead of "doing it over and over until you get it right" as Echostar did. So, again, the issue boils down to cost savings: since the four mock trials (and the cost of trying the case) in Forgent v. Echostar were certainly cheaper than the amount needed to settle a $70 million claim, how many other cases that settled for millions could have been won if the second, third or fourth mock trial had been conducted? Once again, the ethical issue arises: whose money is being wasted here? How many additional lawsuits are instigated when parties resort to "instant settlement" instead of really fighting to win? Is there accountability for this? If so, where?

While the skeptical reader may posit that these examples are cherry-picked from a host of others that would not support the positions advocated here, these conclusions are based on observations of how over one thousand trials have actually been resolved.

While counter-examples certainly can be found, the general conclusions for the most part reflect what trial teams generally do (or fail to do), and what the promise of well-designed research truly holds, based on thirty years of observing litigation and its use – or misuse – of trial sciences.

What Do The Clients and Their Trial Teams Really Want?

The fate of jury consulting will ultimately hinge on what the litigators, their corporate clients, and the insurers actually want. If insurers and corporate clients focus on saving money by attempting to control monthly bills and short-term expenses, these decision-makers will be unlikely to utilize – or realize – the types of long-term economic benefits afforded by well-conducted scientific research. While insurers and the in-house counsel who manage their trial teams certainly care about winning versus losing, they are typically evaluated on their performance in suppressing short-term tangible costs. Minimizing settlement amounts or jury awards based on scientific research is not part of this calculus. When the rubber hits the road at decision time, the types of research expenses that can truly suppress settlement figures or the probability of an adverse verdict are frequently rejected as "too expensive," even though the expense of paying higher settlement amounts or jury awards down the road dwarfs the cost of minimizing or preventing them, when viewed from a longitudinal perspective.

In the insurance industry, claims budgets and indemnity budgets are typically separated: the costs of pre-trial research, like legal defense costs, are drawn from the claims budget, while jury verdict awards and settlement amounts come out of the indemnity budget. But those who decide whether to use trial sciences are evaluated only on how they handle their claims budgets. An insurance insider told me, "A lot of claims adjusters do not want to spend $50,000 out of a claims budget in order to save $200,000 from an indemnity budget." So the claims adjuster will guess at a settlement amount in order to keep the claims budget low, rather than spending what it takes to conduct the research, scientifically ascertain the true value of the case, and save money in the indemnity budget.

There are other disconnects in the insurance industry that lead to absurd settlement decisions – and wasted millions. Take the relationship between the insurance company and its reinsurance carrier. Reinsurers are much more accommodating on claims for actual jury awards than they are for settlement decisions; therefore, insurers will sometimes take a hit at the jury level to make sure that reinsurance will reimburse them, instead of settling for a lower amount, because the reinsurer is more likely to question or second-guess a settlement. One person knowledgeable about the reinsurance industry told me that insurance companies "do not want to risk their positions with reinsurers. They say things like 'I'd rather have a $5 million jury verdict than a $1 million settlement that the reinsurance company might deny.'" Why are reinsurance carriers prone to deny reimbursement of settlements? Because settlements are so frequently predicated on guesses, or "hunches."

At the trial team level, litigators may at times be motivated by a host of factors other than an accurate knowledge of what the jury will ultimately award in damages. While it seems preposterous that the lawyers would be amenable to settling for an amount other than what a jury would actually award, a dispassionate analysis of the situations preceding most settlements reveals that there is often no scientific basis for making a valid inference of probable damages in a given case (even though this information is knowable), and that the lawyers are not particularly concerned about this. The trial teams that do depend on trial sciences are hungry for objective information and are acutely wary of what some of them call "breathing our own exhaust" as they work toward trial. Such trial teams, which use sound methodology in settling cases, will likely view our "wasted money" exemplars as far-fetched, wild or extreme. Our years in the industry, however, suggest that they are in the minority. Close examination of most trial teams reveals that the emphasis, from a planning and execution perspective, is not on preparing every case for a win at the jury level, or even on getting reliable estimates of exposure; rather, the emphasis is placed on a myriad of other factors connected with the client's relationship to the law firm; the image of the litigants; nuisance factors; the perceived amount of time available; settlement posture and timing; and the associated billing fees and structures. The emphasis is not on winning; it is on not losing. As in the legal malpractice example ("They don't care"), it is not particularly important to anyone on the trial team whether the settlement amount is in fact "accurate." The following was taken from one law firm's website:

"Contrary to other litigation firms that fixate on billable-hour inventories, internal budgets and uncontrolled pretrial discovery, we focus on executing a winning trial strategy. Our goal is to win your case - and your confidence. The willingness to go to trial coupled with a proven trial record often delivers better results at the settlement table. When your adversaries know you are prepared to go to trial, the tone and direction of settlement discussions change. Winning - whether at the settlement table or in court - is a function of preparation and focus. Every lawyer will tell you he or she intends to win, but the truth is many attorneys simply are trying not to lose." [emphasis added]

Trial sciences are only deemed necessary once winning is truly the goal. If "not losing" is instead the goal, valid research will never make it onto the radar screen. When "not losing" is the goal, the only possible outcome is settlement for some amount of money that "seems reasonable" without subjecting that amount to objective study – and a hard, honest look at what is happening in litigation today will reveal that this scenario comes closer to depicting the true state of affairs than any other.

Ironically, being "too busy" is one of the most oft-cited reasons for not doing the research, but if the case is to be decided by a jury, how can a credible case be made that there is no time to find out what the jury thinks? Being "busy," of course, simply means either that the person has chosen to do something else, or that more help is needed. In other words, the trial team has decided that actually finding out what the decision-makers will do with the case is not important enough to warrant attention.

The other leading reason – "too expensive" – is similarly ironic when the enormous amounts wasted in settlement are taken into account. An 800% rate of return on investment in a case with damages under $1 million points to staggering amounts of potentially wasted money in the larger cases that are routinely settled without the benefit of science. The example of the insurance adjuster writing a check for $750,000 was chosen specifically because it was the most conservative exemplar available in terms of the amount of money at stake. When settlements are made in the millions, the amount required to perform the research to find out what a jury would actually do is typically a minute percentage of the probable error arising from guesses and hunches.

Moreover, the Forgent v. Echostar example shows that many defendants settle when valid research indicates they can win. In the heavy equipment case (case #2 in Table 1) the contrary position was observed: the defendants who went to trial thought they could keep damages low, declined to participate in the research (that is, to share its costs), and were later hit with a $55 million verdict. In a similar instance, I implored a trial team to conduct a mock trial in Southern Texas and was met with the response, "A mock trial is a luxury." That trial team was subsequently hit for $61 million following their attempt to save on the cost of research that, in all probability, would have alerted them to the dangers that lay ahead. As the Chinese proverb goes, "Cheap is never cheap and expensive is never expensive."

Of course, we have also seen situations in which trial teams declined to do research and later won. Again, however, it is important to keep the magnitudes of the dollar amounts in perspective: the cost of not doing research when it is needed dwarfs the cost of doing the research when it is not needed. Even in cases where winning is not possible, conducting careful research communicates to everyone that the trial team is pulling out all the stops, leaving no stone unturned, in making every attempt to secure a favorable outcome. Indeed, in October of 1989 the Wall Street Journal quoted a famous trial attorney as stating that failing to perform jury research in big cases "borders on malpractice."

We have also been involved in situations where it was obvious that a mock trial was not necessary, and have advised clients to settle immediately without any research when a case fact scenario, or videotaped witness testimony, clearly pointed to imminent disaster. Occasionally one can make reliable inferences from prior trial outcomes when there is patterned or serial litigation in the same venue involving the same witnesses, the same types of claims, and the same sources or causes of damages. Hence, the present recommendations are not "mock try every case – or else." Rather, they are to appreciate how the science works, find out who knows how to implement it in a valid manner, and use those resources intelligently instead of guessing when the situation is ambiguous.

One of the most common phrases we hear from potential clients is, "If the case does not settle, then we need to do a mock trial." Besides the fact that, by now, this should be obvious as a clear case of putting the cart before the horse (not to mention the ethical problems, cited previously, arising from handing out money based on guesses), the certainty of knowing what a jury would really do with the case is frequently worth the cost of the research for the bargaining power it confers at the mediation table alone.

The plaintiff who took $400,000, after having previously fashioned a deal for $750,000 with the claims adjuster, backed down as a result of the sheer interpersonal power wielded by the party who knew the truth behind the bluster and pomp of the mediation environment.

Conclusions

In the final analysis, those at the corporate level who would be in the best position to actually realize the cost effectiveness of scientific research are those who are genuinely affected by sums paid out in settlement or by the impact of large jury verdicts on the corporation. However, the executives at this rarefied level are not the ones making decisions as to whether the science ought to be utilized. Those who do make such decisions are evaluated by their superiors (at least in part) on their ability to minimize expenses on a monthly or quarterly basis, so the research is often never even considered. At the corporate level, those further down the chain of command are rarely rewarded for wins at the jury level, or for accurately determining the lowest possible settlement number, so there is no pressing motivation to ensure that these outcomes are realized. Instead, settlement amounts are often determined by legal staff at the corporate or insurance carrier level who have no substantial courtroom experience at all, let alone the benefit of science. As a result, cases continue to be settled out of convenience, or for a myriad of other reasons that overlook the very real possibility that millions of dollars are being needlessly wasted because no one knows what a real jury would actually do with the case.

We have now started to see a few examples of lawsuits alleging malpractice for failure to conduct adequate research. At some point, it is likely that shareholders, reinsurers and others will begin to realize that it is their money that is being thrown away by guesses at settlement figures. At the very least, science can perform the much-needed task of justifying settlement decisions and protecting the decision-maker against claims by those whose money is actually being spent.

Recommendations

While undoubtedly some cases are simply too small or too minor to warrant the type of attention described in this treatise, for cases that have significance to the company, the following steps are advised for trial teams:

• Approach every case as though it were really going to trial. Identify the "story-teller" at the corporate level and other key witnesses so that they can be trained quickly to minimize the risks arising from poor early deposition performance.

• Hire the jury consultant before discovery begins, to ensure that there are no egregious problems with your witnesses. Conduct mock direct and cross examinations of your key witnesses in front of mock jurors to ensure that the witnesses are in optimal condition for deposition.

• Ensure that adequate vetting has been conducted to verify the experience, credentials, and references of the jury consultant.

• Do not hire a jury consultant who has less than ten years of experience, who cannot provide impressive references, or who lacks proper credentials. An advanced degree in communication or psychology is essential; other fields (e.g., business, marketing, sociology) are unacceptable.

• Once sufficient discovery has been conducted to thoroughly determine the fact scenario of the case, conduct a mock trial with the jury consultant and obtain exposure estimates by averaging results from multiple juries.

• Make adjustments to the defense strategy based on the research results, and attempt to determine, through additional mock trials if necessary, whether the case is winnable.

• If the case does not appear to be winnable, attempt to settle it as early as possible, using the damages estimates from the research as a benchmark or guideline during mediation.

• Understand that winning at the trial level requires intervention on multiple levels, from witness training to creative approaches to graphics and demonstrative exhibits. Use the jury consultant to create a "multi-pronged" strategy coordinating witness testimony, opening statements, graphics, and a solid jury selection strategy.

• Insist on knowing early what kind of jury selection strategy is in effect. If trial counsel is "guessing" at juror profiles, insist that research be carried out to scientifically identify favorable versus unfavorable juror profiles, and that these results be incorporated into voir dire as well as a Supplemental Juror Questionnaire, if the court allows one to be used.

• Consider the use of a shadow jury to monitor courtroom progress if the case is going to court.

• Know what the trial strategy is among the trial team, and insist that the consultant be present at strategy formulation and trial planning meetings, to keep the attorneys informed of the research results that can guide critical decisions in this process.

About the Author

Dr. Speckart received his Ph.D. in Psychology from UCLA in 1984, with a specialization in personality measurement. He has been active in the jury consulting field since 1983 and has conducted over 600 mock trials and focus groups in pre-trial research for numerous types of litigation. Dr. Speckart has worked with litigators in over 150 jury selections, beginning with Dalkon Shield cases in 1983, the Agent Orange litigation in 1984, and the Exxon Valdez litigation in 1994. His area of emphasis has shifted to patent litigation over the past decade as a result of increased demand for assistance in this complex area of jury psychology.

Copyright 2013 Litigation Psychology, LLC. All Rights Reserved.
