
Applications, Aptitudes, and Effectiveness, Assisted Review Series Part 3


Over the nine years since it first rose to prominence in eDiscovery, technology-assisted review has expanded to encompass numerous new tools, more potential workflows, and a variety of new legal issues.

In “Alphabet Soup: TAR, CAL, and Assisted Review,” we discussed TAR’s rise to prominence and the challenges it has created for practitioners.  In “Key Terms and Concepts,” we reviewed the key terms and concepts practitioners need to know.  In this Part, we examine the applications, aptitudes, and effectiveness of TAR approaches.


Now that we have established a coherent framework of terms and concepts for discussing TAR, let’s discuss the contexts in which TAR can be applied, the relative aptitudes of TAR 1.0 and TAR 2.0, and the general effectiveness of TAR as an approach.

Applications of TAR

Review of documents for potential production during discovery is the primary application of TAR workflows, but it is not the only application.  TAR tools and workflows may also be leveraged in other contexts, such as early case assessment (ECA).  Even if a party is uncomfortable relying upon TAR to decide what gets reviewed for production, it might still leverage TAR to organize and prioritize its document collection for a more traditional review process, or to create a quality control yardstick against which to measure that review.

TAR approaches are also valuable options in investigations.  In the context of an internal investigation, there is no need to be concerned about another party objecting to your TAR use or to specifics of your TAR workflow, allowing you to take advantage of TAR’s greater speed and efficiency worry-free.  Many federal agencies are also now comfortable with TAR being used for responses to their investigatory requests (although methodology details generally have to be provided to the agency to secure approval, and document samples are sometimes required).

Aptitudes of TAR 1.0 and 2.0

Because you will now generally have the option of choosing between TAR 1.0 and TAR 2.0 approaches for your projects, it is important to understand their relative aptitudes:

TAR 1.0

  • TAR 1.0 workflows are based, primarily, on a mathematical approach that focuses on the similarities between documents and that is good at finding related clusters. This makes TAR 1.0 well-suited to projects in which you wish to use known relevant materials to train the system, or in which you wish to classify documents into more than two categories.
  • TAR 1.0 workflows do take longer to set up initially, but that additional work also provides more statistical information up front about your document collection, its richness, and the work required, which makes them well-suited to planning and managing larger projects (a simple sketch of how richness can be estimated from a control sample follows this list).
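
To make that up-front statistical information more concrete, here is a minimal, hypothetical sketch (not drawn from any particular TAR platform) of estimating collection richness from a simple random control sample, using a standard normal-approximation margin of error.

    import math

    def estimate_richness(control_sample_labels, collection_size, z=1.96):
        """Estimate collection richness (prevalence of relevant documents)
        from a simple random control sample.

        control_sample_labels: list of booleans, True where the reviewer
                               coded the sampled document as relevant
        collection_size:       total number of documents in the collection
        z:                     z-score for the confidence level (1.96 ~ 95%)
        """
        n = len(control_sample_labels)
        relevant = sum(control_sample_labels)
        richness = relevant / n                                # point estimate
        margin = z * math.sqrt(richness * (1 - richness) / n)  # normal approx.
        return {
            "richness": richness,
            "margin_of_error": margin,
            "estimated_relevant_documents": round(richness * collection_size),
        }

    # Hypothetical example: 120 of 1,500 randomly sampled documents are coded
    # relevant, giving an estimated richness of 8% (+/- ~1.4%) and roughly
    # 8,000 relevant documents in a 100,000-document collection.
    sample = [True] * 120 + [False] * 1380
    print(estimate_richness(sample, collection_size=100_000))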

TAR 2.0

  • TAR 2.0 workflows are based, primarily, on a mathematical approach that focuses on the differences between documents and that is good at handling edge cases between two categories. This makes TAR 2.0 well-suited to projects in which you only wish to classify documents into two clear categories (e.g., relevant and not relevant), and in which you are comfortable letting the system choose the materials used to train it (a simplified sketch of such a workflow follows this list).
  • TAR 2.0 workflows do not require the advance creation of training and control sets during set up, which makes them well-suited to matters in which speed is paramount and in which you are comfortable knowing less up front about expected richness or required work. TAR 2.0 workflows also have greater tolerance for minor inconsistencies in training decisions, which makes it possible to use somewhat larger training teams, further adding to the speed and ease of set up.
  • TAR 2.0 workflows’ greater speed and ease can also result in lower costs, which makes them suitable for some matters that are too small to benefit from TAR 1.0 workflows.
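
For illustration only, here is a deliberately simplified sketch of the iterative loop at the heart of a TAR 2.0 (continuous active learning) workflow. The classifier, batch size, and function names are our own illustrative choices; actual TAR platforms use proprietary algorithms and more sophisticated stopping criteria.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    def cal_review(documents, seed_coding, code_document, batch_size=50, max_batches=100):
        """Simplified continuous-active-learning loop.

        documents:     list of document texts for the whole collection
        seed_coding:   dict {doc_index: True/False} containing at least one
                       relevant and one not-relevant example to start from
        code_document: callable that takes a document index and returns the
                       human reviewer's True/False relevance decision
        """
        features = TfidfVectorizer().fit_transform(documents)
        coded = dict(seed_coding)
        for _ in range(max_batches):
            uncoded = [i for i in range(len(documents)) if i not in coded]
            if not uncoded:
                break
            # Re-train the model on everything coded so far.
            labeled = list(coded)
            model = LogisticRegression(max_iter=1000)
            model.fit(features[labeled], [coded[i] for i in labeled])
            # The system, not the review team, selects the next training
            # documents: the uncoded documents it scores as most likely relevant.
            scores = model.predict_proba(features[uncoded])[:, 1]
            batch = [i for _, i in sorted(zip(scores, uncoded), reverse=True)][:batch_size]
            for i in batch:
                coded[i] = code_document(i)
            # Real workflows also monitor how few new relevant documents each
            # batch yields and stop once recall targets appear to be met.
        return coded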

Effectiveness of Technology-Assisted Review

The next obvious question is: how effective are TAR approaches?  The short answer is: when used correctly, at least as effective as traditional human review.  This is partly because TAR performs well and partly because traditional human review is not nearly as reliable as practitioners assume.

The Sedona Conference’s Best Practices Commentary on the Use of Search and Information Retrieval Methods in E-Discovery describes a persistent myth in eDiscovery:

It is not possible to discuss this issue without noting that there appears to be a myth that manual review by humans of large amounts of information is as accurate and complete as possible – perhaps even perfect – and constitutes the gold standard by which all searches should be measured.

The reality is quite different.  Even the best reviewers make numerous mistakes due to simple human fallibility, and reviewers frequently come to different conclusions regarding questions of relevance, privilege, and more.  Studies have shown surprisingly low consistency between the independent results of equivalent review teams (“Assessor Overlap”); two examples follow, along with a sketch of how such overlap is typically computed:

  • In the study Variations in Relevance Judgments and the Measurement of Retrieval Effectiveness, three teams of reviewers independently assessed approximately 13,500 documents, and no two teams’ relevance judgments overlapped by more than 49.4%.
  • In the study Document Categorization in Legal Electronic Discovery, two teams of reviewers independently assessed 5,000 documents previously assessed in an actual case; neither team’s overlap with the original production exceeded 16.3%, and their overlap with each other was only 28.1%.
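
For context on how such figures are derived, here is a minimal sketch of the “overlap” measure, assuming the common definition of overlap as the documents both teams marked relevant divided by the documents either team marked relevant. This is our own illustration, not code from the cited studies.

    def assessor_overlap(team_a_relevant, team_b_relevant):
        """Overlap between two review teams' relevance calls: the documents
        both teams marked relevant divided by the documents either team
        marked relevant (intersection over union of the two relevant sets)."""
        a, b = set(team_a_relevant), set(team_b_relevant)
        if not (a | b):
            return 0.0
        return len(a & b) / len(a | b)

    # Hypothetical example: Team A marks documents {1, 2, 3, 4} relevant and
    # Team B marks {3, 4, 5, 6} relevant; they agree on 2 of the 6 documents
    # either team marked relevant, for an overlap of about 33%.
    print(assessor_overlap({1, 2, 3, 4}, {3, 4, 5, 6}))  # 0.333...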

In 2011, a seminal journal article was published titled Technology-Assisted Review in E-Discovery Can Be More Effective and More Efficient Than Exhaustive Manual Review.  This article examined the results of the 2009 Text Retrieval Conference’s Legal Track Interactive Task to see how TAR approaches compared to traditional approaches, and it found (a) that human review was far from perfect and (b) that TAR was as good or better, particularly with regard to precision (a short sketch of how recall and precision are computed appears after the quoted findings below):

  • “The quantitative results show that the recall of the manual reviews varies from about 25% (Topic 203) to about 80% (Topic 202). That is, human assessors missed between 20% and 75% of all relevant documents.”  [emphasis added]
  • “The vast majority of missed documents are attributable either to inarguable error or to misinterpretation of the definition of relevance (interpretive error). Remarkably, the findings identify only 4% of all errors as arguable.”  [emphasis added]
  • “. . . by all measures, the average efficiency and effectiveness of the five technology-assisted reviews surpasses that of the five manual reviews. The technology-assisted reviews require, on average, human review of only 1.9% of the documents, a fifty-fold savings over exhaustive manual review.”  [emphasis added]
  • “Overall, the myth that exhaustive manual review is the most effective – and therefore, the most defensible – approach to document review is strongly refuted. Technology-assisted review can (and does) yield more accurate results than exhaustive manual review, with much lower effort.”  [emphasis added]
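
To ground the recall and precision figures quoted above, here is a minimal, generic sketch of how those two measures are computed from a review’s results (an illustration of the standard definitions, not the study’s own methodology).

    def recall_and_precision(documents_identified, documents_actually_relevant):
        """Recall    = fraction of all relevant documents the review found.
           Precision = fraction of the documents the review found that are
                       actually relevant.  Both arguments are sets of IDs."""
        found = documents_identified & documents_actually_relevant
        recall = len(found) / len(documents_actually_relevant) if documents_actually_relevant else 0.0
        precision = len(found) / len(documents_identified) if documents_identified else 0.0
        return recall, precision

    # Hypothetical example: a collection contains 1,000 relevant documents,
    # and a review flags 900 documents, 720 of which are actually relevant:
    # recall = 720 / 1,000 = 0.72 and precision = 720 / 900 = 0.80.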

And, as we will see, TAR has since been deemed adequately effective – both in theory and in practice – in a variety of cases.


Upcoming in this Series

In the next Part, we will continue our discussion of assisted review with a look at some of the case law addressing whether parties are allowed to use TAR.


About the Author

Matthew Verga

Director of Education

Matthew Verga is an electronic discovery expert proficient at leveraging his legal experience as an attorney, his technical knowledge as a practitioner, and his skills as a communicator to make complex eDiscovery topics accessible to diverse audiences. A thirteen-year industry veteran, Matthew has worked across every phase of the EDRM and at every level from the project trenches to enterprise program design. He leverages this background to produce engaging educational content to empower practitioners at all levels with knowledge they can use to improve their projects, their careers, and their organizations.
