Putting Statistics to Work in eDiscovery: Use Cases for Incorporating Statistical Sampling and Analysis



In two prior posts, I first made the case that all litigators need to understand some basic statistics, and then provided a primer on the key statistical concepts they should know. In this final post in the series, I suggest some of the best opportunities for incorporating statistical sampling and analysis into discovery efforts.

Improve early case assessment and strategy development.

By reviewing relatively small but statistically representative samples of documents from the different sources and custodians in a matter, a party can more effectively conduct early case assessment and begin to develop the case strategy. For example, sampling can enable the litigant to efficiently home in on the sources and custodians of information most likely to be relevant. Sampling can also help a producing party assess the burdens and costs involved in accessing certain information, such as backup tapes or other offline media. An early sampling review also will provide an estimate of the “richness” or “prevalence” of a population – i.e., the percentage of responsive documents in the collection – before undertaking review, which can inform the strategy and workflow for the review.
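To make the prevalence estimate concrete, here is a minimal sketch in Python. The function name and the sample counts are purely illustrative, and the margin of error uses the standard normal approximation – for real matters, consult a statistician about the appropriate interval method and sample design.

```python
import math

def estimate_prevalence(sample_size, responsive_found, z=1.96):
    """Estimate collection prevalence from a simple random sample.

    Returns the point estimate and a normal-approximation margin of
    error (z = 1.96 corresponds to roughly 95% confidence).
    """
    p = responsive_found / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, margin

# Hypothetical: 60 responsive documents found in a 400-document sample
p, moe = estimate_prevalence(400, 60)
print(f"Estimated prevalence: {p:.1%} \u00b1 {moe:.1%}")  # 15.0% ± 3.5%
```

In this hypothetical, the party could tell the court (or plan its review budget around) an estimated prevalence of about 15 percent, plus or minus 3.5 points.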

Assess the culling of a data set to ensure that your cull is neither over-broad nor too restrictive.

Collections of documents can be culled to eliminate irrelevant data with tried-and-true methodologies such as domain and file type exclusions, date filters, and keyword searches, or with more cutting-edge techniques such as predictive coding and other advanced analytics. Often parties use a combination of approaches. Regardless of the means of accomplishing it, litigants must ensure that the cull neither eliminates too many relevant documents nor brings in too many irrelevant ones. (In statistical terms, “recall” is the percentage of all relevant documents in the collection that remain after the cull, and “precision” is the percentage of documents in the culled-down set that are in fact relevant.) Using statistical sampling, the party can test its proposed methodology and protect against both under- and over-inclusive culling.
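The recall and precision definitions above reduce to simple ratios. This short Python sketch uses hypothetical counts (the numbers are invented for illustration; in practice, the inputs would themselves be sample-based estimates):

```python
def recall_precision(relevant_retained, relevant_total, retained_total):
    """Recall: share of all relevant documents that survive the cull.
    Precision: share of the culled-down set that is actually relevant."""
    recall = relevant_retained / relevant_total
    precision = relevant_retained / retained_total
    return recall, precision

# Hypothetical: 450 of the collection's 500 relevant documents survive
# a cull that retains 1,000 documents overall.
r, p = recall_precision(450, 500, 1000)
print(f"Recall: {r:.0%}, Precision: {p:.0%}")  # Recall: 90%, Precision: 45%
```

A cull like this one keeps most of what matters (90 percent recall) but still carries substantial irrelevant material forward (45 percent precision) – exactly the kind of trade-off sampling lets the party see before committing to a workflow.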

Measure the efficacy of search terms and refine the terms.

Keyword-based searches are used in discovery in various ways beyond the initial culling of a data set. Whatever the purpose of the search, it should optimize recall and precision – the search should do a reasonable job of finding what’s being searched for while leaving behind what’s not. The only way to accomplish this reliably is through statistical sampling and analysis. Using statistical measurements, a party can try different sets of search terms, and refine the terms to maximize precision and recall.
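The iterative refinement described above can be sketched in a few lines of Python. Everything here is hypothetical – the function, the tiny coded sample, and the term set – but it shows the mechanics: run a candidate term set against a sample of documents that reviewers have already coded for relevance, then score it.

```python
def evaluate_terms(terms, coded_sample):
    """Score a candidate keyword set against a reviewed random sample.

    coded_sample: list of (document_text, is_relevant) pairs coded by
    human reviewers. Returns (recall, precision) for this term set.
    """
    hits = [(text, rel) for text, rel in coded_sample
            if any(t in text.lower() for t in terms)]
    relevant_total = sum(rel for _, rel in coded_sample)
    relevant_hit = sum(rel for _, rel in hits)
    recall = relevant_hit / relevant_total if relevant_total else 0.0
    precision = relevant_hit / len(hits) if hits else 0.0
    return recall, precision

# Hypothetical coded sample and candidate term set
coded = [
    ("Merger agreement draft", True),
    ("Lunch plans for Friday", False),
    ("Merger timeline update", True),
    ("Re: merger jokes", False),
    ("Holiday party invite", False),
]
r, p = evaluate_terms(["merger"], coded)
print(f"Recall: {r:.0%}, Precision: {p:.0%}")  # Recall: 100%, Precision: 67%
```

The party would repeat this scoring as it adds, narrows, or drops terms, keeping the combination that best balances recall against precision.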

Verify the accuracy of a predictive coding process.

For parties using predictive coding in eDiscovery, statistical measurements are indispensable for ensuring that the technology produces an accurate, defensible result. Whether the tool is being used to cull a document set, prioritize review, replace human review decisions, or merely assist in early case assessment, the output generated by the software must be verified for accuracy through statistical analysis.

Test automated methods of screening documents for privilege and confidentiality.

As with document culling and keyword searching, any automated methods used to screen documents for potentially privileged, proprietary, or confidential content should be tested using statistical sampling and measurement. If a filter allows privileged documents to slip through, the consequences can be catastrophic. While no means of screening for privilege is 100 percent effective, statistical testing of the screen can bring greater confidence in the accuracy of the outcome.

Sample a document production before it “goes out the door” to provide additional assurance that privileged content is not inadvertently included.

Federal Rule of Evidence 502(b) provides certain protections for an inadvertent production of privileged information, provided that the producing party took “reasonable steps” to prevent the disclosure and promptly took reasonable steps to rectify the error once discovered. In addition to statistical validation of privilege screens (which certainly qualifies as a reasonable step to prevent disclosure), a party might also consider reviewing a statistically valid sample of documents in a final production to ensure that no privileged content accidentally found its way into the production.
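One common way to size such a final-production sample is the “zero findings” approach: sample enough documents that, if the reviewers find no privileged material, the party can state with a chosen confidence level that the true rate of privileged documents is below some ceiling. The sketch below is illustrative only (the function name and example rates are hypothetical, and real sampling protocols should be designed with a statistician); it applies the exact binomial relationship (1 − max_rate)ⁿ ≤ 1 − confidence.

```python
import math

def sample_size_for_zero_findings(max_rate, confidence=0.95):
    """Minimum sample size n such that finding ZERO privileged documents
    in a random sample of n supports the claim, at the given confidence,
    that the true rate of privileged documents is below max_rate.

    Solves (1 - max_rate)**n <= 1 - confidence for n.
    """
    return math.ceil(math.log(1 - confidence) / math.log(1 - max_rate))

# Hypothetical: to support "fewer than 1% privileged, at 95% confidence"
print(sample_size_for_zero_findings(0.01))  # 299
```

In this hypothetical, a clean sample of 299 documents would support the statement that fewer than 1 percent of the production is privileged, at 95 percent confidence – a concrete, documentable “reasonable step.”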

Support proportionality arguments.

Sampling can help a litigant determine whether the cost of reviewing certain types or sources of information is reasonable and proportional. By sampling from a collection, the party can assess the “bang for the buck” in reviewing a particular set of documents, based on how many (and what kinds of) responsive documents are estimated to be found. Armed with statistical evidence, the party can argue that it should not be required to spend time and money reviewing a custodian’s documents if the collection has very low prevalence. Similarly, it could assert that it is not worth the expense to continue reviewing more documents from more custodians if that additional effort is not likely to yield significantly more relevant information in addition to what already has been identified and produced.

Conduct quality control and quality assurance of human review efforts.

Statistics can be used to measure the error rate of document review decisions for an overall project, or for particular reviewers. When done in real time as the project progresses, this error rate measurement can be part of an effective quality control (“QC”) workflow. When done at the conclusion of the project, or at the end of a certain phase of the project, the measurement becomes part of the quality assurance (“QA”) and defensibility of process efforts.

However, there is an important caveat about statistically measuring error rates of decisions in document review. These human decisions about documents inevitably involve an element of subjectivity, and even the best decision-makers will make mistakes. This is true for all decisions, whether in the original review, a QC review, or in a QA sampling review. Even the “gold standard” decisions made by experienced reviewers, or by senior counsel when doing their quality checks, will not be 100 percent consistent or correct. This element of human error and legitimate differences of opinion always will introduce some degree of “measurement error” into a statistical measurement involving human decision-making. Litigants should therefore keep in mind that statistical measurement of human decisions can never be more accurate than human judgment permits.

That does it for our series on statistical sampling. As we’ve explained, statistics are an essential tool for conducting efficient, effective and accurate discovery. Lawyers who recognize opportunities for using statistics, and then apply them smartly, will improve their discovery efforts and better represent their clients.

Author Details
Senior Vice President, Discovery Strategy & Data Privacy/Security
A recognized thought leader in e-discovery, Maureen collaborates with the company’s clients and operations teams to develop innovative information strategies for legal discovery, compliance, and sensitive data protection. She speaks and writes frequently on significant issues in e-discovery and information governance, and participates actively in the Sedona Conference Working Groups on Electronic Document Retention and Production and Data Privacy and Security. Prior to DiscoverReady, Maureen was a partner at Paul Hastings LLP, where she represented Fortune 100 companies in complex employment litigation matters.