When our team sat down to build the Relativity 7.5 feature set for our processing solution, the goal was to solve a specific problem for our end users, with processing administrators particularly in mind. Even by the standards of the wider litigation support industry, these administrators often face tasks that come with tight timelines and big demands.
Although we often focus on workflow tips on this blog, we thought it might be helpful to address a few Relativity compatibility questions we’ve received regarding Internet Explorer 10, as the new software and hardware Microsoft released back in November uses IE10 exclusively.
Jay Leib: What trends are you seeing in the legal industry right now related to computer-assisted review? What’s the temperature of the market?
David Horrigan: The legal industry is absolutely much more receptive than it was a year ago. The feedback we receive indicates that more and more e-discovery requests for proposals are making specific references to computer-assisted review technologies, such as predictive coding. Much of it started in February with judicial acceptance of computer-assisted review in the Da Silva Moore case, which gave many lawyers the confidence to explore the technologies without being accused of trying to use voodoo science. Of course, the underlying technologies have existed outside the legal field for years, and legal acceptance grew during the year in a progression of cases, including Kleen Products, Global Aerospace, and In re Actos. I think a really interesting development was the October court hearing in the EORHB case—known by many as the Hooters case—where the court told the parties that, if they didn’t want to use computer-assisted review, they needed to show cause why not. We went a long way in a few months: a court pushing the technology is a big step past merely allowing it.
Big data, all of the electronically stored information being created in the enterprise, both structured and unstructured, became a ubiquitous term in the media throughout 2012 and remains at the top of the enterprise’s mind as we close out this year.
For the past two years, we’ve asked a consulting firm, Strait & Associates, to help us understand how big data has impacted the Relativity universe. How has this trend been affecting our end users? The analyses demonstrated the following.
- Comparing the 50 largest cases housed in Relativity, the median case size grew from 960,000 documents in 2009 to 9.2 million in 2012.
- Comparing the 100 largest cases housed in Relativity, the median case size grew from 520,000 documents in 2010 to 5.9 million in 2012.
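Taken together, those medians imply roughly an order-of-magnitude jump in just a few years. A quick back-of-the-envelope calculation, using only the figures in the bullets above, makes the growth multiples explicit:

```python
# Growth multiples implied by the median case sizes reported above.
cases = {
    "top 50 cases (2009 -> 2012)": (960_000, 9_200_000),
    "top 100 cases (2010 -> 2012)": (520_000, 5_900_000),
}

for label, (earlier, later) in cases.items():
    print(f"{label}: {later / earlier:.1f}x growth in median case size")
```

That works out to roughly a 9.6x increase for the top 50 cases and an 11.3x increase for the top 100.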
Several members of our advice@kCura team are experts in custom development, and they’re excited to see our partners and clients building applications and integrations to extend Relativity’s functionality. To highlight some of the unique ways our users are taking advantage of the platform, we interviewed a few of our partners and clients who have created some more complex applications.
Taking advantage of the Developer Showcase at Relativity Fest 2012, we sat down with Mark Dingle, founder of London-based LitSavant Ltd. Mark, who is also a Relativity Independent Consultant, established LitSavant in 2010 after more than a decade of experience in the litigation support industry. The company’s flagship product, the LitSavant Conformity Engine, is a Relativity Ecosystem application that simplifies the process of designing and implementing custom logic in a Relativity environment. For more information about the Conformity Engine, contact LitSavant directly.
Many clients approach us with questions about Relativity Assisted Review. Depending on their experience with the technology, they might want to know how it works, whether their case warrants using it, or what the best practices are for identifying good example documents, among other questions.
Jay Leib—our chief strategy officer and resident Relativity Assisted Review expert—recently wrote a guest article for Legal Technology Insider that takes a look at what is needed upfront to successfully implement Assisted Review. Specifically, the article touches on seven expectations that should be carefully considered before and during the process. We developed these expectations over time based on the hands-on experience of our clients.
Have you used Relativity Assisted Review? You may have a different name for it, though, as the industry has been using various phrases for computer-assisted review for some time. Assisted Review is the process of amplifying attorneys’ review decisions to suggest coding decisions on all documents in a universe, and then validating the results with statistical analysis.
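That validation step is essentially a sampling exercise. As a minimal illustration only (the function, sample figures, and normal-approximation interval below are our own sketch, not Assisted Review’s actual statistical methodology), one could estimate coding accuracy from a random validation sample like this:

```python
import math

def estimate_accuracy(sample_size: int, agreements: int, z: float = 1.96):
    """Point estimate and normal-approximation 95% confidence interval
    for coding accuracy, based on a random validation sample.

    Illustrative sketch only; a real validation workflow may use
    different statistics and sampling designs.
    """
    p = agreements / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical example: a reviewer agrees with 570 of 600 sampled
# machine-suggested coding decisions.
p, low, high = estimate_accuracy(600, 570)
print(f"accuracy {p:.1%}, 95% CI [{low:.1%}, {high:.1%}]")
```

The wider the interval, the larger the validation sample needed before the results can be relied on, which is why sample size is a key consideration in any computer-assisted review workflow.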
Last week at Relativity Fest, our clients were able to share their own stories of using Assisted Review in the field in our Real-World Stories of Relativity sessions. Assisted Review has been used in internal investigations, environmental matters, and, most commonly, litigation. It can help reduce review costs for large volumes of documents, ensure all reviewers share the same understanding of the issues, or quickly analyze a case. Fest was therefore an excellent opportunity for our clients to step into the spotlight and share their experiences.
In this second installment of his interview series, Jay Leib—kCura’s chief strategy officer and resident computer-assisted review expert—talks about information retrieval with Dr. David Grossman, an adjunct professor of computer science at the Illinois Institute of Technology.
Jay Leib: Your background is in information retrieval. What is the definition of information retrieval, and how is it different from other computer science disciplines?
David Grossman: Much of computer science focuses on obtaining the right answer to a problem, quickly and accurately, every time. Information retrieval, on the other hand, is much less well defined. A search may mean different things to different people, and the “right” answer may be a matter of opinion. Information retrieval is the study of algorithms and heuristics that enable people to find the information they need, and only the information they need, as quickly as possible. When I was a database systems programmer in 1986, search was more definitive: you got the right answers back from a database of values. Since I began more formally working on research problems and publishing papers in information retrieval in 1992, the field has become a more distinct discipline.
Last year, spontaneous conversations at ILTA and Relativity Fest brought to light a shared challenge among our clients: managing the progress of their document review and setting the right expectations for their clients. Often, review timelines are tight and every second counts—and the overall success of a case is dependent on meeting aggressive deadlines under budget. However, gathering real-time insight to make forecasts is a manual and arduous process.
How to Deal with Non-Responsive Documents that Contain Responsive Language
After posting a couple months back about our Reviewer Protocol document—which outlines best practices for identifying good example documents while using computer-assisted review—we received a number of requests for it. Though we realize it’s not the Magna Carta or Detective Comics #27, we hope it can be a helpful reference for anyone conducting a Relativity Assisted Review project. We thought it could be useful to dive into some of its content more deeply, and share some detail on how collaborating with our users has helped us continue to improve the protocol.