August 14th, 2017
As volumes of electronically stored information increase, more and more sensitive data finds its way into ESI collected for legal matters, and it's becoming increasingly difficult for organizations to protect sensitive data effectively in discovery. We explore specific measures for protecting sensitive data that can be incorporated into e-discovery project workflows. Learn how to find it, cull it, and protect your client's or organization's sensitive data in e-discovery.
October 11th, 2016
In this post, we focus on one complexity of estimating richness, precision, and recall when searching for sensitive information: sample "depth," the level at which we intend to measure and remediate. Learn how to account for sample depth when measuring sensitive data.
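To make the three measures concrete, here is a minimal sketch of how richness, precision, and recall are typically computed from a reviewed sample. The counts below are invented for illustration and are not drawn from the post itself.

```python
def richness(positives_in_sample, sample_size):
    """Estimated prevalence of sensitive documents in the collection."""
    return positives_in_sample / sample_size

def precision(true_positives, false_positives):
    """Of the documents the search flagged, the fraction actually sensitive."""
    return true_positives / (true_positives + false_positives)

def recall(true_positives, false_negatives):
    """Of the truly sensitive documents, the fraction the search found."""
    return true_positives / (true_positives + false_negatives)

# Hypothetical example: review of a 1,000-document sample finds 80
# sensitive documents; the search flagged 90, of which 72 were sensitive.
print(richness(80, 1000))   # 0.08
print(precision(72, 18))    # 0.8
print(recall(72, 8))        # 0.9
```

Note that precision and recall are measured against different denominators (what was flagged vs. what is truly sensitive), which is why the "depth" at which you sample determines which of these you can estimate reliably.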
June 10th, 2016
Read our discussion, aimed at educating eDiscovery practitioners, of the statistical testing and measurement of techniques used to find documents containing relevant information for discovery, a context in which these statistics are fairly well settled and easily understood.
March 28th, 2015
In United States v. O’Keefe, former U.S. Magistrate Judge John Facciola tackled the subject of using keyword search terms to help identify relevant documents for production in discovery. Observing that the proper use of search terms in e-discovery involves “the sciences of computer technology, statistics and linguistics,” the Judge offered the now famous quip that for lawyers and judges to opine on the effectiveness of a given set of search terms “is truly to go where angels fear to tread.”
March 20th, 2014
In two prior posts, I first made the case that all litigators need to understand some basic statistics, and then provided a primer on the key statistical concepts in e-discovery they should know. In this final post in the statistical sampling series, I suggest some of the best opportunities for incorporating statistical sampling and statistical analysis into discovery efforts.
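As one illustration of the sampling arithmetic such posts typically rely on, here is a sketch using the standard normal approximation for a proportion. It is a generic textbook formula, not a calculation taken from the series, and the ±5% target is an assumed example.

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Half-width of a 95% normal-approximation confidence interval
    for an observed proportion p_hat from a simple random sample of n."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

def sample_size(margin, p=0.5, z=1.96):
    """Smallest n giving at most `margin` error at 95% confidence,
    using the conservative worst-case assumption p = 0.5."""
    return math.ceil((z / margin) ** 2 * p * (1 - p))

# Example: estimating prevalence to within +/-5% at 95% confidence
# requires roughly 385 randomly sampled documents, regardless of
# how large the overall collection is.
print(sample_size(0.05))  # 385
```

The practical point for discovery workflows is that defensible estimates of prevalence (and of how well a search performed) can come from reviewing a few hundred randomly selected documents rather than the entire collection.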