
Still Crazy after All These Years, Technology-Assisted Review Series Part 1

Seven years after it first rose to prominence in eDiscovery, technology-assisted review remains an important, and at times controversial, tool in the practitioner’s toolkit.


Technology-assisted review (“TAR”) first rose to prominence in the legal industry around 2011 under the name predictive coding.  In that year, the first few discovery-oriented TAR solutions were in use, Recommind received a patent for its predictive coding process, and discussion of the technology’s potential to transform legal practice spread from industry press to mainstream media outlets.

Despite all of that early enthusiasm, however, TAR’s rate of adoption has remained slower than expected over the last seven years.  As one litigation partner recently told Legaltech News: “It’s not ubiquitous yet.  It’s being used only in a small minority of cases, but it is being used more, which is a good thing from my perspective.”

Slow though it may be, signs of TAR’s continued growth and importance abound.

Relevant case law, too, has continued to accumulate.  From 2012’s da Silva Moore to 2017’s Winfield and beyond, courts have wrestled with whether and when TAR use is acceptable, whether it can be required, what processes should be employed when it is used, and more.  In this series, we will survey the case law in this area to provide you with the available guidance from the courts.

For Anyone Unfamiliar

Before we jump into our case law survey, I want to provide a brief, high-level explanation of TAR for any readers who may be unfamiliar with it.  TAR refers to any of several workflows in which humans’ classifications of documents (e.g., as relevant or not relevant) are used to guide additional document classifications performed by software rather than humans.  Those software classifications may be presented as binary results (e.g., relevant vs. not relevant) or as probability scores (e.g., 85% certain this is relevant).  Such classifications are not based on exact text matches like keyword searches, but on complex semantic analysis of the language in documents.
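To make that idea concrete, here is a minimal sketch in Python of the core mechanic: human coding decisions on a small seed set train a statistical text classifier, which then produces probability scores for unreviewed documents.  The documents, labels, and the simple TF-IDF model shown here are hypothetical and illustrative only; commercial TAR engines employ considerably more sophisticated semantic analysis.

```python
# Minimal sketch of the core TAR mechanic: human-coded examples train
# a classifier, which then scores the rest of the collection.
# All documents and labels below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical seed set coded by human reviewers (1 = relevant, 0 = not)
seed_docs = [
    "draft merger agreement with proposed acquisition terms",
    "quarterly earnings summary prepared for the board",
    "lunch menu for the office holiday party",
    "fantasy football league standings and scores",
]
seed_labels = [1, 1, 0, 0]

# Convert text into numeric features, then fit a classifier on them
vectorizer = TfidfVectorizer()
X_train = vectorizer.fit_transform(seed_docs)
model = LogisticRegression().fit(X_train, seed_labels)

# Score an unreviewed document; the score is the model's estimated
# probability that the document is relevant
unreviewed = ["proposed amendment to the merger agreement"]
scores = model.predict_proba(vectorizer.transform(unreviewed))[:, 1]
print(f"Probability relevant: {scores[0]:.0%}")
```

In a real workflow, documents scoring above a chosen threshold might be produced or routed to reviewers, while low-scoring documents are set aside, subject to the validation sampling described below.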

Most such workflows are iterative, with several rounds in which the software’s classifications are evaluated by humans to further improve the accuracy of those classifications.  Most also include a variety of sampling steps to estimate prevalence, to measure the workflow’s efficacy, and to test for missed materials.  The two criteria most often measured and discussed are recall, which measures how much of the total relevant material was found, and precision, which measures how much of the material classified as relevant actually was relevant.
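For a concrete sense of those two metrics, here is a short sketch using hypothetical counts from a validation sample in which each document’s true relevance is known.

```python
# Hypothetical counts from a validation sample (illustrative only)
true_positives = 850   # relevant documents the workflow found
false_negatives = 150  # relevant documents the workflow missed
false_positives = 170  # irrelevant documents it flagged as relevant

# Recall: what share of all the relevant material was found?
recall = true_positives / (true_positives + false_negatives)

# Precision: what share of the flagged material was actually relevant?
precision = true_positives / (true_positives + false_positives)

print(f"Recall:    {recall:.0%}")     # 85%
print(f"Precision: {precision:.0%}")  # 83%
```

The two metrics pull against each other: flagging more documents as relevant tends to raise recall but lower precision, which is why parties often negotiate target recall levels rather than demanding perfection on both.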

When leveraged successfully, a TAR process can evaluate and classify a large volume of ESI using substantially fewer hours of manual, human document review than other approaches, thereby reducing cost and time while generally achieving superior results.  It is not, however, suitable for all reviews.  Small reviews, complex reviews, or reviews heavy in multimedia, for example, are generally better approached in other ways.


Upcoming in this Series

In the next part of this series, we will review 2012’s da Silva Moore, the first published order approving the use of TAR.


About the Author

Matthew Verga, JD
Director, Education and Content Marketing

Matthew Verga is an electronic discovery expert proficient at leveraging his legal experience as an attorney, his technical knowledge as a practitioner, and his skills as a communicator to make complex eDiscovery topics accessible to diverse audiences. An eleven-year industry veteran, Matthew has worked across every phase of the EDRM and at every level from the project trenches to enterprise program design. He leverages this background to produce engaging educational content to empower practitioners at all levels with knowledge they can use to improve their projects, their careers, and their organizations.
