
Management Metrics for eDiscovery – Program Management Series, Part 6

A multi-part series on eDiscovery program management issues facing serial litigants, including readiness, resources, service providers, metrics, and more


In the first Part of this series, we reviewed the concept of program management (as distinguished from project management) and discussed its potential cost and risk reduction benefits.  In the second Part, we discussed the evaluation and improvement of organizational litigation readiness.  In the third Part, we discussed how to evaluate your existing needs and resources.  In the fourth Part, we discussed the available solution models.  In the fifth Part, we discussed the evaluation of eDiscovery service providers.  In this Part, we continue our review of program management issues with a look at tracking metrics for program management.

Metrics: Intra-Project vs. Inter-Project

Each eDiscovery project offers countless opportunities to track key information over time.  From monitoring responsiveness rates for different sources to tracking overturn rates for different reviewers, intra-project metric monitoring can cover both the materials in a project and the methods employed to execute it.  Source prioritization and reviewer quality control are just two examples of the many project management advantages that can be gained by capturing such metrics.
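As a concrete illustration of this kind of intra-project monitoring, the sketch below (in Python, using hypothetical record fields and a hypothetical function name) computes a per-source responsiveness rate and a per-reviewer quality-control overturn rate from a set of per-document review records; any real implementation would depend on the export format of your review platform.

    # Illustrative sketch only: record fields are hypothetical, not drawn
    # from any particular review platform.
    from collections import defaultdict

    def intra_project_rates(records):
        """Each record is a dict with keys: 'source', 'reviewer',
        'responsive' (bool), 'qc_reviewed' (bool), 'qc_overturned' (bool)."""
        by_source = defaultdict(lambda: [0, 0])    # source -> [responsive, reviewed]
        by_reviewer = defaultdict(lambda: [0, 0])  # reviewer -> [overturned, qc_checked]

        for r in records:
            by_source[r["source"]][0] += 1 if r["responsive"] else 0
            by_source[r["source"]][1] += 1
            if r["qc_reviewed"]:
                by_reviewer[r["reviewer"]][0] += 1 if r["qc_overturned"] else 0
                by_reviewer[r["reviewer"]][1] += 1

        responsiveness = {s: hits / total for s, (hits, total) in by_source.items()}
        overturn = {rv: flips / checked
                    for rv, (flips, checked) in by_reviewer.items() if checked}
        return responsiveness, overturn

Rates like these can be recalculated as review progresses to reprioritize sources and to target quality control where overturn rates are highest.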

There are even greater advantages, however, to moving beyond intra-project metric monitoring to inter-project metric monitoring.  When the relevant project details are captured across multiple projects in a standardized, normalized way, they reveal additional insights invaluable for proactive program management within an organization.  Most importantly, tracking key metrics across projects enables you to establish benchmarks for the cost, time, and effectiveness of various tasks, approaches, and service providers (including yourselves).  And, against those benchmarks, goals for incremental, iterative improvement can then be set and achieved.

Example Project Metrics to Track

Which metrics will be most useful to your organization will depend, to some extent, on the types of matters and eDiscovery activities in which your organization is most often engaged, as well as on your balance of insourcing and outsourcing.  A variety of useful, general-purpose metrics sets are available publicly; for one good example from which to select metrics for your organization, see the EDRM Metrics Model.  Here, we review examples of key project metrics worth tracking to realize cross-project, program management benefits:

  • Metrics About Your Data and Its Collection
    • Sources, source types, and date ranges collected
    • Volumes and data types by source
    • Individual and departmental custodians collected
    • Volumes and data types by custodian
    • Collection methods and costs by source, type, custodian, etc.
  • Metrics About Your Processing
    • Platforms, tools, and utilities used
    • Throughput and error rates per platform, etc.
    • Processing cost and time variations per data type
    • Human versus machine hours required per data type
    • Filtering techniques employed during processing and their efficacy
  • Metrics About Your ECA and Culling
    • Platforms and tools used for analysis and culling
    • Techniques and features used for analysis and culling
    • Volume reductions achieved and time required to achieve them
    • Responsiveness rate during review (as a measure of overall culling efficacy)
  • Metrics About Your Review
    • Platforms and tools used for document review
    • Batch size and batch organization employed
    • Review workflow used (e.g., with or without dedicated redactors)
    • Use and effect of email threading or near-duplicate identification
    • Quality control time required and overturn rates observed
    • Time and process details for technology-assisted review, if used
  • Additional Project Metrics
    • Matter type and value at risk
    • Jurisdiction and presiding judge
    • Use of magistrate or special master for eDiscovery
    • Negotiated eDiscovery limits (e.g., agreed search terms)
    • Total eDiscovery costs and costs per document produced
    • Percentage of collected documents produced
    • Matter outcome achieved

And, as noted above, this list contains just key examples for each category.  Depending on your program management goals and your areas of focus, there are many additional, more granular metrics that might be tracked as well.
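To compare such metrics across projects, each project's figures need to be captured in a consistent, normalized structure.  The sketch below shows one possible shape for such a record, in Python, with example fields drawn from the categories above; the field names and the ProjectMetrics name itself are illustrative assumptions, not a standard schema.

    # Illustrative sketch only: one possible normalized, per-project record.
    from dataclasses import dataclass, field

    @dataclass
    class ProjectMetrics:
        matter_id: str
        matter_type: str                 # e.g., "employment", "IP"
        jurisdiction: str
        gb_collected: float
        docs_collected: int
        docs_reviewed: int
        docs_produced: int
        responsiveness_rate: float       # responsive docs / docs reviewed
        review_overturn_rate: float      # QC overturns / docs QC-checked
        total_ediscovery_cost: float
        service_providers: list = field(default_factory=list)

        @property
        def cost_per_doc_produced(self) -> float:
            return (self.total_ediscovery_cost / self.docs_produced
                    if self.docs_produced else 0.0)

        @property
        def production_percentage(self) -> float:
            return (self.docs_produced / self.docs_collected
                    if self.docs_collected else 0.0)

Capturing even a modest record like this for every matter is what makes the cross-project comparisons discussed below possible.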

Example Benefits of Tracking Inter-Project Metrics

As noted above, the most important benefit of tracking key metrics across projects is that it enables you to establish data-based benchmarks that you can use for more reliable future cost estimation, for more useful assessment of individual project progress, for more meaningful evaluation of service providers, and for the implementation of iterative improvement goals.

Here are more specific examples of the kinds of benefits that can be realized by tracking the example metrics identified above:

  • Creation of more reliable estimates of collection needs and costs for future projects
  • Identification of overlapping or repeated collections where past work can be reused
  • Quantification of the speed and accuracy of your internal and external processing options
  • More accurate estimation of the time that will be required for future processing work
  • Measurement of the over- and under-inclusiveness of various filters during processing
  • Documentation of the most effective analysis and culling approach for each matter type
  • Determination of the most efficient review workflow and batch size for each matter type
  • Finding of the inflection point at which technology-assisted review is the right choice
  • Uncovering of trends across all organizational litigation to anticipate future needs
  • Comparison of overall cost, efficiency, and other factors across service providers

Not all organizations have sufficiently frequent litigation to realize all of these benefits or to make tracking all of the above metrics worthwhile.  But for those organizations that are (or are becoming) serial litigants, there is no end to the benefits that come from replacing anecdotal evidence with data and engaging in data-driven decision-making about eDiscovery.
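As an illustration of how such records can be turned into benchmarks, the sketch below rolls hypothetical per-project records (such as the ProjectMetrics structure sketched earlier) up into per-matter-type medians that could inform future cost estimates; the grouping key and the statistics chosen are examples, not prescriptions.

    # Illustrative sketch only: per-matter-type benchmarks from project records.
    from collections import defaultdict
    from statistics import median

    def benchmarks_by_matter_type(projects):
        """projects: iterable of ProjectMetrics-like objects (see sketch above)."""
        grouped = defaultdict(list)
        for p in projects:
            grouped[p.matter_type].append(p)

        benchmarks = {}
        for matter_type, group in grouped.items():
            benchmarks[matter_type] = {
                "projects": len(group),
                "median_cost_per_doc_produced": median(p.cost_per_doc_produced for p in group),
                "median_responsiveness_rate": median(p.responsiveness_rate for p in group),
                "median_production_percentage": median(p.production_percentage for p in group),
            }
        return benchmarks

    # Example use: estimate cost for a new matter from the relevant benchmark.
    # estimate = expected_docs_produced * benchmarks["employment"]["median_cost_per_doc_produced"]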


Upcoming in this Series

In the next Part of this series, we will continue our review of eDiscovery program management issues with a discussion of leveraging these metrics, and other steps, for ongoing program maintenance and improvement.


About the Author

Matthew Verga, JD
Director, Education and Content Marketing

Matthew Verga is an electronic discovery expert proficient at leveraging his legal experience as an attorney, his technical knowledge as a practitioner, and his skills as a communicator to make complex eDiscovery topics accessible to diverse audiences. A ten-year industry veteran, Matthew has worked across every phase of the EDRM and at every level from the project trenches to enterprise program design. He leverages this background to produce engaging educational content to empower practitioners at all levels with knowledge they can use to improve their projects, their careers, and their organizations.
