
What TAR Process Results Are Objectionable?, Assisted Review Series Part 7


Over the nine years since it first rose to prominence in eDiscovery, technology-assisted review has expanded to include numerous new tools, more potential workflows, and a variety of legal issues

In “Alphabet Soup: TAR, CAL, and Assisted Review,” we discussed TAR’s rise to prominence and the challenges that it has created for practitioners.  In “Key Terms and Concepts,” we discussed the key terms and concepts practitioners need to know.  In “Applications, Aptitudes, and Effectiveness,” we discussed use cases, relative merits, and overall efficacy.  In “Are You Allowed to Use TAR?,” we discussed case law on authorization to use TAR.  In “Can You Be Compelled to Use TAR?,” we discussed case law on being compelled or directed to use TAR.  In “When Is the Right Time for TAR Process Objections?,” we discussed case law on objection timing.  In this Part, we review case law discussing what results are objectionable.


As we saw in “When Is the Right Time for TAR Process Objections?,” process objections are supposed to be based on actual deficiencies in actual results.  So, what results are deficient enough to be objectionable?  We previously discussed in “Applications, Aptitudes, and Effectiveness” the academic studies showing that traditional human review is far from perfect and that technology-assisted review approaches can be at least as effective, if not more so, but that doesn’t really answer the question.  How good is good enough?

TAR Approaches: How Good Is Good Enough?

Unfortunately, the wide range of TAR approaches, their extensive customizability, and the myriad situations and data sets to which they might be applied mean that there can be no bright line rule for when results are adequate.  The only real standards are reasonableness and proportionality:

  • Reasonableness – There is an oft-repeated refrain in discovery that the expectation is reasonableness, not perfection.  This is drawn from the requirement in Federal Rule of Civil Procedure 26(g) that a “reasonable inquiry” be made to ensure the completeness of discovery responses, as well as from the case law addressing this issue.  As articulated by the Judge in Dynamo, for example:

The second myth is the myth of a perfect response.  The [respondent] is seeking a perfect response to his discovery request, but our Rules do not require a perfect response. . . . 

Likewise, “the Federal Rules of Civil Procedure do not require perfection.”  Like the Tax Court Rules, the Federal Rule of Civil Procedure 26(g) only requires a party to make a “reasonable inquiry” when making discovery responses.  [internal citation omitted; emphasis added] 

Thus, if the TAR approach in question was a reasonable inquiry, its results will generally be deemed acceptable even if imperfect.  The reasonableness of what has been done – or of what extra work is being requested – is evaluated using a proportionality standard.

  • Proportionality – As we discussed, Magistrate Judge Peck emphasized in da Silva Moore the importance of applying a post-hoc, fact-based proportionality analysis when assessing the adequacy of a discovery process and its results.  He also listed several examples of the factors he would consider in the context of a TAR process, including total documents, actual costs, the significance of any missed materials, and more.  In Hyles, he wrote:

While [the plaintiff] may well be correct that production using keywords may not be as complete as it would be if TAR were used, the standard is not perfection, or using the “best” tool, but whether the search results are reasonable and proportional.  [internal citations omitted, emphasis added]

A deeper dive into proportionality in eDiscovery and the cases discussing it can be found in the article series, white paper, and recorded webinar here.

Cases Discussing Results

There have not been many reported cases in which assisted review results were reported, disputed, or discussed on the record, but here are some examples:

  • In Dynamo, which we also previously discussed, the TAR process employed resulted in the production of around 180,000 responsive documents that contained a few thousand relevant documents. After review of the produced materials, the respondent argued that the production was incomplete and sought to compel the production of additional search term results it believed were also relevant but that had not been found by the TAR approach.  The Judge, however, was not convinced that more discovery was warranted because (a) the standard for a production’s completeness is reasonableness not perfection and (b) “it is inappropriate to hold TAR [] to a higher standard than keywords or manual review” (quoting Rio Tinto PLC v. Vale, S.A., et al., No. 14 Civ. 3042 (RMB) (AJP) (S.D.N.Y. Mar. 2, 2015)).  The Judge concluded:

There is no question that petitioners satisfied our Rules when they responded using predictive coding.  Petitioners provided the Commissioner with seed sets of documents from the backup tapes, and the Commissioner determined which documents were relevant.  That selection was used to develop the predictive coding algorithm.  After the predictive coding algorithm was applied to the backup tapes, petitioners provided the Commissioner with the production set.  Thus, it is clear that petitioners satisfied our Rules with their response.  Petitioners made a reasonable inquiry in responding to the Commissioner’s discovery demands when they used predictive coding to produce any documents that the algorithm determined was responsive, and petitioners’ response was complete when they produced those documents.  [emphasis added]

  • In In re Domestic Airline Travel Antitrust Litig., No. 15-1404 (D.D.C. Sept. 13, 2018), one defendant’s TAR process produced results with very low precision.  The defendant had been aiming for a recall rate of at least 75% and a “reasonable level” of precision.  Instead, its process resulted in a recall rate of 97.4% and precision of only 16.7%.  Out of about 3,500,000 documents produced, only about 600,000 were relevant – vast overinclusion.  The Judge did not directly address the reasonableness of this result (“. . . [the defendant] spends a good deal of time on its argument that its precision level and the resulting document production are reasonable, but that argument is irrelevant to the issue at hand . . .”), but she did conclude that it entitled the plaintiffs to an extension of the discovery schedule so they would have time to adequately sift through all the produced materials.
  • In Winfield, which we also previously discussed, the plaintiffs alleged that the defendant’s TAR process was deficient due to improper training, resulting in some improper document designations and an incomplete production. To evaluate these claims, the Magistrate Judge conducted an in camera review of process documentation and document samples from the defendant, and found (a) that the effort had been conscientious and thorough and (b) that only a few documents out of many had actually been miscategorized.  She concluded that this was insufficient evidence of deficiency to invalidate the defendant’s reasonable efforts:

. . . this Court does not find the labeling of these 20 documents, only 5 of which were “incorrectly” categorized as non-responsive during the initial ESI review – out of the 100,000 documents that have been reviewed thus far in this case – sufficient to question the accuracy and reliability of the [defendant’s] TAR process as a whole.  [emphasis added]
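For readers who want to see how the metrics in these cases are computed, here is a minimal sketch using the standard definitions of recall and precision (covered in “Key Terms and Concepts”).  The function names and the rounded document counts are illustrative approximations drawn from the figures reported above, not from the opinions themselves:

```python
def recall(relevant_found, relevant_total):
    """Fraction of all existing relevant documents that the review found."""
    return relevant_found / relevant_total

def precision(relevant_found, total_produced):
    """Fraction of the produced documents that are actually relevant."""
    return relevant_found / total_produced

# Illustrative, rounded figures like those reported in
# In re Domestic Airline Travel: roughly 600,000 relevant
# documents within a production of about 3,500,000.
p = precision(600_000, 3_500_000)
print(f"precision: {p:.1%}")  # low precision despite very high recall
```

As the example suggests, a process can achieve very high recall (finding nearly everything relevant) while still being vastly overinclusive, which is what shifted the burden of sifting onto the receiving party in that case.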


Upcoming in this Series

In the next Part, we will continue our discussion of assisted review with a look at some of the case law addressing process transparency.


About the Author

Matthew Verga

Director of Education

Matthew Verga is an electronic discovery expert proficient at leveraging his legal experience as an attorney, his technical knowledge as a practitioner, and his skills as a communicator to make complex eDiscovery topics accessible to diverse audiences. A thirteen-year industry veteran, Matthew has worked across every phase of the EDRM and at every level from the project trenches to enterprise program design. He leverages this background to produce engaging educational content to empower practitioners at all levels with knowledge they can use to improve their projects, their careers, and their organizations.
