
Aligning Your Lenses to See through the Fog, ECA Fundamentals Series Part 6

6 / 6

A multi-part series on the fundamentals eDiscovery practitioners need to know about effective early case assessment in the context of electronic discovery

In “Clearing the Fog of War,” we reviewed the uncertainty inherent in new matters and the three overlapping goals of ECA.  In “Sampling a Well-Stocked Toolkit,” we began our survey of available tools and techniques with an overview and with a discussion of sampling.  In “Searching and Filtering for Fun and Profit,” we continued our survey with a discussion of searching and filtering options.  In “Threading, Duplicates & Near-Duplicates,” we turned our attention to tools for handling threading and duplicates.  In “Advanced Analytic Tools,” we concluded our survey of tools and techniques with a review of advanced analytic tools and TAR workflows.  In this final Part, we discuss how to bring these options together.


Our survey of tools and techniques for early case assessment has revealed a wide range of available options, each with different strengths and intended applications, but achieving effective ECA is not a question of applying as many of these tools and techniques as you can.  Rather, it is a question of selecting the right ones to best serve your primary goal – whether that’s Traditional ECA, EDA, or Downstream Prep – and then building on those initial steps in a rational way to eventually achieve all three goals over the course of your ECA efforts.

To return to our earlier analogies, it is a question of aligning the right lenses, in the right order, to peer through the fog of war and bring your informational quarry into sharp focus.

For Pursuing Traditional ECA

When your top priority is pursuing the Traditional ECA goal, the first question to ask yourself is how much knowledge you have of what you expect to find.  If you know a lot about what you’re looking for in your ESI (e.g., from thorough custodian interviews, from overlap with prior legal matters, etc.), you may be able to jump right to searching for it.  If you don’t know a lot about the materials you’re seeking, which is more common, you will want to start with one or more of the tools and techniques best suited to revealing unknown unknowns:

  • Formal random sampling to estimate prevalence, which lets you see a cross-section of everything you have, including the range of language your custodians actually use, to help you better plan your next ECA steps (see the sketch after this list)
  • Semantic indexing features, which let you use concept searching to find relevant materials without knowing the best search terms, concept clustering to explore a cross-section of topics, and categorization to use a few relevant examples to find more
  • TAR 2.0 workflows (i.e., continuous active learning), which can rapidly surface relevant materials once you have even a few good examples to learn from
  • AI analysis and visualization tools, which can reveal patterns of communication and behavior and assist with completing the picture of what happened in other ways
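
Because formal sampling is the most math-driven of these options, a minimal sketch may help make it concrete.  The sketch below estimates prevalence, with a Wilson confidence interval, from reviewers’ relevance calls on a simple random sample; the function name and all figures are illustrative assumptions, and most review platforms offer equivalent reporting built in.

```python
import math

def estimate_prevalence(sample_labels, z=1.96):
    """Estimate prevalence and a ~95% Wilson confidence interval
    from relevance calls (True/False) on a simple random sample."""
    n = len(sample_labels)
    p = sum(sample_labels) / n
    # The Wilson score interval behaves better than the plain normal
    # approximation when prevalence is low, as it often is in ECA.
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return p, max(0.0, center - margin), min(1.0, center + margin)

# Illustrative use: reviewers coded 38 documents relevant in a random
# sample of 1,500 drawn from a 200,000-document collection.
labels = [True] * 38 + [False] * 1462
p, low, high = estimate_prevalence(labels)
print(f"Estimated prevalence: {p:.1%} (95% CI: {low:.1%} to {high:.1%})")
print(f"Projected relevant documents: {200_000*low:,.0f} to {200_000*high:,.0f}")
```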

Once you start to get a handle on what you are really seeking (or if you knew from the start), you can transition from these initial, exploratory efforts to more targeted search and filtering efforts, which can quickly find relevant materials and hot documents.  And, as you find relevant materials to review, thread and duplicate management tools can be used to find related materials to review for context as needed (e.g., related emails, alternate drafts, etc.).

For Pursuing EDA

If your top priority is pursuing the EDA goal, finding individual documents and facts is less important than ensuring that a sufficiently complete collection has taken place and that any filtering applied during processing has not been excessive.  In such situations, your focus should be on tools and techniques that help you see the big picture of your ESI collection and reveal the gaps within it:

  • Metadata filtering and visualization tools, which help you assess the completeness of your collection by revealing ranges of values and gaps in those ranges, as well as potentially revealing important date ranges and sources, the connections between custodians, and more (see the sketch after this list)
  • Concept clustering, which can provide a valuable overview of the content types and topics within your materials, including revealing an absence of things you expected or the presence of things you don’t need
  • AI analysis and visualization tools, which can reveal collection gaps through communication mapping and other unsupervised computer analyses
  • Thread and duplicate management tools, which can provide another way to map conversation threads to reveal gaps requiring further collection, or which can reveal the presence of excessive near-duplicates suggesting a collection or processing issue
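
To make the metadata-gap idea concrete, here is a minimal sketch that flags months in which a custodian produced no email at all, one common signal of under-collection or over-filtering.  It assumes each collected item carries a custodian name and a sent date; the data layout and function name are hypothetical, and in practice a platform’s timeline visualizations surface the same information for you.

```python
from collections import defaultdict
from datetime import date

def find_date_gaps(items, start, end):
    """Flag year-months in [start, end] with zero items per custodian.
    Each item is assumed to be a (custodian, sent_date) pair."""
    months_seen = defaultdict(set)
    for custodian, sent in items:
        months_seen[custodian].add((sent.year, sent.month))

    # Enumerate every year-month in the matter's relevant range.
    expected = []
    y, m = start.year, start.month
    while (y, m) <= (end.year, end.month):
        expected.append((y, m))
        m += 1
        if m > 12:
            y, m = y + 1, 1

    return {c: [f"{y}-{m:02d}" for (y, m) in expected if (y, m) not in seen]
            for c, seen in months_seen.items()}

# Illustrative use: two custodians, one with a suspicious hole in early 2021.
items = [("alice", date(2021, 1, 5)), ("alice", date(2021, 3, 9)),
         ("bob", date(2021, 1, 2)), ("bob", date(2021, 2, 14)),
         ("bob", date(2021, 3, 30))]
print(find_date_gaps(items, date(2021, 1, 1), date(2021, 3, 31)))
# -> {'alice': ['2021-02'], 'bob': []}
```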

Formal random sampling can also be useful during EDA, particularly if there are disputes over the appropriate scope of preservation and collection that need to be resolved.  Sampling to estimate prevalence can be used to apply relative value determinations to different sources and tranches and to estimate costs and benefits associated with specific proposed work.
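
As a back-of-the-envelope illustration of that cost-benefit use, a prevalence estimate from such a sample translates directly into projected review costs and yields; all of the figures below are assumptions for illustration only.

```python
# Rough cost/benefit projection for reviewing one disputed tranche.
# Every figure below is an illustrative assumption, not a benchmark.
tranche_size = 80_000                          # documents in the tranche
prevalence_low, prevalence_high = 0.01, 0.03   # sampled 95% CI bounds
cost_per_doc = 1.25                            # assumed all-in review cost (USD)

est_cost = tranche_size * cost_per_doc
low_yield = tranche_size * prevalence_low
high_yield = tranche_size * prevalence_high
print(f"Reviewing this tranche would cost ~${est_cost:,.0f} to surface "
      f"roughly {low_yield:,.0f} to {high_yield:,.0f} relevant documents.")
```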

For Pursuing Downstream Prep

When your top priority is pursuing the Downstream Prep goal, you are concerned with learning about what happened, but only insofar as that informs what must be reviewed later and how it should be prioritized.  And, you are concerned with understanding the properties and the big picture of the ESI you’ve collected, but only insofar as that informs what tools and techniques for culling you should choose and what review methodologies are likely to be effective.  All of the tools and techniques discussed so far can be leveraged to assist in the Downstream Prep effort:

  • Formal random sampling to test classifiers, which allows you to iteratively improve any searches you plan to apply for culling, ensuring that they minimize unnecessary downstream review work without missing important materials (see the sketch after this list)
  • Searching and metadata filtering, which can both be leveraged to eliminate as much of the chaff as possible without losing an unreasonable amount of the wheat, thereby reducing all downstream review and production costs
  • Thread and duplicate management tools, which can dramatically speed up later review work, both by eliminating materials not requiring review and by providing superior organization to what remains
  • Semantic indexing features, which can let you use concept clustering to help organize and prioritize subsequent review activity or let you leverage TAR workflows
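
To illustrate the classifier-testing bullet above, here is a minimal sketch of one common validation pattern: an elusion test run on a random sample of the documents a proposed cull would discard, with an approximate recall figure derived from it.  The helper function and every number in it are illustrative assumptions, not a prescribed protocol.

```python
import math

def elusion_test(discard_sample_labels, discard_size, retained_relevant_est):
    """Estimate how many relevant documents a proposed cull would leave
    behind ('elusion'), from a random sample of the discard pile, and
    derive an approximate recall figure for the cull as a classifier."""
    n = len(discard_sample_labels)
    elusion_rate = sum(discard_sample_labels) / n
    # Normal-approximation 95% margin of error on the elusion rate.
    margin = 1.96 * math.sqrt(elusion_rate * (1 - elusion_rate) / n)
    missed_est = elusion_rate * discard_size
    recall_est = retained_relevant_est / (retained_relevant_est + missed_est)
    return elusion_rate, margin, missed_est, recall_est

# Illustrative use: a search-term cull would discard 150,000 documents;
# 4 of a 1,000-document random sample of that discard pile were relevant,
# and sampling of the retained set projects ~9,000 relevant documents there.
rate, moe, missed, recall = elusion_test([True]*4 + [False]*996, 150_000, 9_000)
print(f"Elusion: {rate:.2%} +/- {moe:.2%}; ~{missed:,.0f} relevant documents "
      f"left behind; estimated recall: {recall:.0%}")
```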

For Assistance or More Information

Xact Data Discovery (XDD) is a leading international provider of eDiscovery, data management and managed review services for law firms and corporations.  XDD helps clients optimize their eDiscovery matters by orchestrating precision communication between people, processes, technology and data.  XDD services include forensics, eDiscovery processing, Relativity hosting and managed review.

XDD offers exceptional customer service with a commitment to responsive, transparent and timely communication to ensure clients remain informed throughout the entire discovery life cycle.  At XDD, communication is everything – because you need to know.  Engage with XDD; we’re ready to listen.


About the Author

Matthew Verga

Director, Education and Content Marketing

Matthew Verga is an electronic discovery expert proficient at leveraging his legal experience as an attorney, his technical knowledge as a practitioner, and his skills as a communicator to make complex eDiscovery topics accessible to diverse audiences. A twelve-year industry veteran, Matthew has worked across every phase of the EDRM and at every level from the project trenches to enterprise program design. He leverages this background to produce engaging educational content to empower practitioners at all levels with knowledge they can use to improve their projects, their careers, and their organizations.
