A multi-part series providing guidance on how to effectively scope and plan eDiscovery projects
In the first Part of this series, we reviewed the value of preparation, planning, and checklists, as well as the evolving challenges and expectations associated with eDiscovery project planning. In the second Part, we discussed the initial eDiscovery project scoping steps you must take. In the third Part we discussed some of the investigative steps that can follow, including targeted interviews and reactive data mapping. In this Part, we continue our review of investigative options with surveying and sampling.
As we discussed in the last Part, the investigative options you need to undertake to test your project assumptions will depend on the specifics of your project, especially on its scale. Larger or more complex projects will require more – and more ambitious – investigative efforts. Surveying and sampling are particularly useful and important in projects that feature a large number of potential custodians or other sources.
Surveying in this context, like targeted interviews, is part and parcel of what would normally be your full, pre-collection custodian interview process (“…planning an eDiscovery project is an iterative process that overlaps and intersects with other early project activities.”). And, like targeted interviews, it is worth thinking about surveying as more than just collection planning. Beyond just documenting what technology devices or other materials a custodian has, surveying can help you test all of your assumptions about what exists, how/when/why it’s generated, where and in what formats it is, what distinctive characteristics it bears, what priority it should have, who else might have it too, and more.
Surveying can be accomplished in a variety of mechanical ways, ranging from passing out and collecting paper forms to custom-building a secure web survey. In between those extremes, there are electronic forms built in spreadsheets or PDF documents. Leveraging PDF forms has some advantages, because the forms can be easily locked from editing outside the input fields, multiple field types can be used (e.g., checkboxes, radio buttons, free text entry), and responses can be easily extracted and aggregated using Adobe tools. Additional options include less-secure, third-party web survey tools and survey features included in some enterprise hold management software.
Getting a survey created and distributed takes a bit more lead time and costs a bit more money than just conducting a few targeted interviews, but for the right project the benefits make it well worth it, because once created, a survey scales freely across any number of custodians. Imagine needing to gather information from dozens (or hundreds) of custodians scattered across the country (or world) in different offices and departments. Targeted individual interviews alone would provide too little information about what is potentially out there to plan confidently, while comprehensive individual interviews would take an enormous commitment of people, time, and money. In such situations, surveys – at least as an initial data gathering activity from which targeted follow-up interviews can be planned – strike an ideal balance.
The final investigative option we’ll review in this series is sampling. In the land of eDiscovery, “sampling” is used to refer to both judgmental and statistical sampling. In this early project planning phase, both kinds of sampling can be useful.
Judgmental sampling is the informal process of looking at parts of something large to get an anecdotal sense of the whole (for example, attorneys running a variety of instinctively-selected search terms in a new document database to familiarize themselves with what’s there). Judgmental sampling is also what you’re doing when you select key individuals for targeted interviews, using them as proxies for the whole list of hypothetical custodians.
More importantly, though, judgmental sampling is a way to learn about what’s on sources and systems that, unlike custodians, cannot self-report to you. This kind of judgmental sampling can take a variety of forms depending on the sources and systems at issue.
Statistical sampling is the more formal process of taking simple random samples of sufficient size to reliably estimate properties of the whole set. For example, reviewing and coding 2,400 randomly selected documents from a million-document set to estimate, with a confidence level of 95% and a confidence interval of +/-2%, how much of the total is relevant (or privileged, etc.).
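The arithmetic behind figures like these is easy to check. As an illustration (not a tool from the article), here is a minimal Python sketch of the standard normal-approximation sample size formula with an optional finite population correction; the `sample_size` function name and the hard-coded z-score table are our own assumptions for the sketch:

```python
import math

def sample_size(confidence=0.95, margin=0.02, population=None, p=0.5):
    """Estimate the simple random sample size needed to measure a proportion.

    Uses the normal-approximation formula n0 = z^2 * p(1-p) / e^2,
    with an optional finite population correction. p=0.5 is the
    worst-case (largest-sample) assumption.
    """
    # z-scores for common confidence levels (avoids an external dependency)
    z_table = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}
    z = z_table[confidence]
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    if population:
        # finite population correction shrinks the required sample slightly
        n0 = n0 / (1 + (n0 - 1) / population)
    # round to damp floating-point noise, then always round up to whole documents
    return math.ceil(round(n0, 6))
```

With the defaults shown (95% confidence, ±2% interval), this yields 2,401 documents, or about 2,396 after the finite population correction for a million-document set – consistent with the roughly 2,400-document figure cited above. Note that the required sample size is essentially flat once the population is large, which is what makes statistical sampling so economical for voluminous sets.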
Depending on your project’s scale and timeline, you may proceed from judgmental sampling to statistical sampling by loading and coding formal samples from your initial, test collections. If you need to say with certainty whether a category of devices or sources is worth pursuing further, formal sampling of one or more devices’ contents can provide that certainty, and if those contents are voluminous, formal sampling will do so far faster than broad review.
These sampling techniques are especially important in this era of increased focus on proportionality. Negotiations with opposing parties often happen in parallel with internal project planning and other early activities, and negotiations about the appropriate scope and scale of discovery are always more effective when assumptions can be backed up (or disproved) by actual facts and examples. This provides an additional use for, and benefit from, your investigative efforts. For years, judges have been emphasizing to parties the importance of using sampling to flesh out facts about what is and isn’t actually there instead of fighting over theories about what might be. See, e.g., Pippins v. KPMG, Case No. 11 Civ. 377 (CM)(JLC) (S.D.N.Y. 2012).
As we noted in the first Part, checklists are invaluable tools for ensuring consistency and completeness in your efforts. Here are model checklists for targeted surveying and sampling, which you can customize for your organization:
Upcoming in this Series
In the next Part of this series, we will continue our discussion of eDiscovery project planning by moving on from investigation activities to volume estimation and downstream planning.
About the Author
Matthew Verga, JD
Director, Education and Content Marketing
Matthew Verga is an electronic discovery expert proficient at leveraging his legal experience as an attorney, his technical knowledge as a practitioner, and his skills as a communicator to make complex eDiscovery topics accessible to diverse audiences. A ten-year industry veteran, Matthew has worked across every phase of the EDRM and at every level from the project trenches to enterprise program design. He leverages this background to produce engaging educational content to empower practitioners at all levels with knowledge they can use to improve their projects, their careers, and their organizations.