TAR first rose to prominence in the legal industry around 2011 under the name predictive coding. The term predictive coding was largely abandoned in favor of the more generic TAR – or, sometimes, computer-assisted review (CAR). TAR and CAR have since been joined by TAR 2.0 and by CAL (continuous active learning), while LSI and PLSA have been joined by SVM and other new acronyms – creating an alphabet soup.
TAR was already intimidating to practitioners back when it was just predictive coding. How do legal practitioners get a handle on key Technology-Assisted Review terms and concepts, including workflows, mathematical approaches, indexes, maps, and sampling? When leveraged successfully, a TAR process can evaluate and classify large volumes of ESI materials, take less time than human document review, reduce costs, and generally achieve superior results.
Document review for production is the primary application of TAR, but it is not the only one, and you now have the choice of TAR 1.0 or TAR 2.0. Having established a coherent framework of terms and concepts for TAR, let's discuss the contexts in which TAR can be applied, the relative aptitudes of TAR 1.0 and TAR 2.0, and the general effectiveness of Technology-Assisted Review in eDiscovery.
The year after technology-assisted review first rose to prominence in eDiscovery, Monique da Silva Moore, et al., v. Publicis Groupe SA & MSL Group (S.D.N.Y. Feb. 24, 2012) became the first case in which the use of TAR received judicial approval. Since then, numerous courts – both in the US and abroad – have addressed the same question and concluded that parties are allowed to use TAR approaches to meet their discovery obligations.
Not long after da Silva Moore became the first case in which the use of TAR was judicially approved, Kleen Products became the first case in which a requesting party tried to compel a responding party to utilize a TAR approach. Since that time, numerous courts have addressed the question and have concluded that one party cannot compel another to use TAR – but a judge might direct its use in certain situations.
As we have seen in the cases, the decision of whether or not to use a TAR approach generally rests with the producing party. Requesting parties cannot stop producing parties from using a TAR approach if they wish to, or make them use one if they don't wish to. What if a requesting party has concerns, though, about the specifics of a producing party's TAR process? When can TAR process objections be raised, and what case law supports them?
As we have just seen, process objections are supposed to be based on actual deficiencies in actual results. So, what results are deficient enough to be objectionable? We have previously discussed the academic studies showing that traditional human review is far from perfect and that technology-assisted review approaches can be at least as effective, if not more so, but that doesn't really answer the question: how good is good enough?
Another common question regarding the use of TAR approaches in discovery is whether process transparency is required. Process transparency in this context refers to transparency regarding how the chosen TAR approach is being deployed in the case. Although there is some variation in the cases, the short answer is that process transparency is preferred but is not typically required absent some demonstrated deficiency in the process’s results.
Due to the enormous volume of materials collected in some cases, parties may wish to use keyword searches followed by a TAR approach applied to the results. While search term filtering is not typically recommended by the developers of TAR tools, and some parties have objected to their opponents' use of it, courts have allowed parties to apply search term filtering to a document population before applying a TAR approach to the result.