Over the nine years since it first rose to prominence in eDiscovery, technology-assisted review has expanded to include numerous new tools, more potential workflows, and a variety of legal issues.
In “Alphabet Soup: TAR, CAL, and Assisted Review,” we discussed TAR’s rise to prominence and the challenges that it has created for practitioners. In “Key Terms and Concepts,” we discussed the key terms and concepts practitioners need to know. In “Applications, Aptitudes, and Effectiveness,” we discussed use cases, relative merits, and overall efficacy. In “Are You Allowed to Use TAR?,” we discussed case law on authorization to use TAR. In “Can You Be Compelled to Use TAR?,” we discussed case law on being compelled or directed to use TAR. In “When Is the Right Time for TAR Process Objections?,” we discussed case law on objection timing. In this Part, we review case law discussing what results are objectionable.
As we saw in “When Is the Right Time for TAR Process Objections?,” process objections are supposed to be based on actual deficiencies in actual results. So, what results are deficient enough to be objectionable? In “Applications, Aptitudes, and Effectiveness,” we discussed the academic studies showing that traditional human review is far from perfect and that technology-assisted review approaches can be at least as effective, if not more so, but those studies don’t really answer the question. How good is good enough?
Unfortunately, the wide range of TAR approaches, their extensive customizability, and the myriad situations and data sets to which they might be applied mean that there can be no bright-line rule for when results are adequate. The only real standards are reasonableness and proportionality:
The second myth is the myth of a perfect response. The [respondent] is seeking a perfect response to his discovery request, but our Rules do not require a perfect response. . . .
Likewise, “the Federal Rules of Civil Procedure do not require perfection.” Like the Tax Court Rules, the Federal Rule of Civil Procedure 26(g) only requires a party to make a “reasonable inquiry” when making discovery responses. [internal citation omitted; emphasis added]
Thus, if the TAR approach in question was a reasonable inquiry, its results will generally be deemed acceptable even if imperfect. The reasonableness of what has been done – or of what extra work is being requested – is evaluated using a proportionality standard.
While [the plaintiff] may well be correct that production using keywords may not be as complete as it would be if TAR were used, the standard is not perfection, or using the “best” tool, but whether the search results are reasonable and proportional. [internal citations omitted, emphasis added]
A deeper dive into proportionality in eDiscovery and the cases discussing it can be found in the article series, white paper, and recorded webinar here.
There have not been many cases in which assisted review results were reported, disputed, or discussed on the record, but here are some examples:
There is no question that petitioners satisfied our Rules when they responded using predictive coding. Petitioners provided the Commissioner with seed sets of documents from the backup tapes, and the Commissioner determined which documents were relevant. That selection was used to develop the predictive coding algorithm. After the predictive coding algorithm was applied to the backup tapes, petitioners provided the Commissioner with the production set. Thus, it is clear that petitioners satisfied our Rules with their response. Petitioners made a reasonable inquiry in responding to the Commissioner’s discovery demands when they used predictive coding to produce any documents that the algorithm determined was responsive, and petitioners’ response was complete when they produced those documents. [emphasis added]
. . . this Court does not find the labeling of these 20 documents, only 5 of which were “incorrectly” categorized as non-responsive during the initial ESI review – out of the 100,000 documents that have been reviewed thus far in this case – sufficient to question the accuracy and reliability of the [defendant’s] TAR process as a whole. [emphasis added]
Upcoming in this Series
In the next Part, we will continue our discussion of assisted review with a look at some of the case law addressing process transparency.