Hyper-threading (HT), an Intel technology, allows a single core to leverage unused resources within its architecture. With HT enabled, a single core appears to the operating system as two logical cores. Typically, this improves performance. The general consensus in the SQL community seems to be to leave HT turned on—unless you see evidence that suggests otherwise. In this post, we’ll review some key considerations to help you understand those “otherwise” circumstances, specifically on a physical SQL box.
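To make the logical-versus-physical distinction concrete, a short script can compare the two counts the operating system reports; with HT enabled, the logical count is typically double the physical count. This is a minimal sketch that parses Linux’s /proc/cpuinfo—a hypothetical helper for illustration, not a Relativity or SQL Server utility (on Windows you would check Task Manager or msinfo32 instead).

```python
def core_counts():
    """Return (logical, physical) CPU counts by parsing /proc/cpuinfo (Linux only)."""
    logical = 0
    cores = set()  # unique (physical id, core id) pairs = physical cores
    phys = core = None
    with open("/proc/cpuinfo") as f:
        for line in f:
            if line.startswith("processor"):
                logical += 1
            elif line.startswith("physical id"):
                phys = line.split(":")[1].strip()
            elif line.startswith("core id"):
                core = line.split(":")[1].strip()
                cores.add((phys, core))
    # Some virtualized environments omit the id fields; fall back to logical.
    physical = len(cores) or logical
    return logical, physical
```

On an HT-enabled box, `core_counts()` would return something like `(16, 8)`; equal counts suggest HT is disabled or unavailable.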
Version 1.3 of Relativity Binders—the latest release of our free iPad® app for accessing documents and performing case-related prep on the go—hit the App Store on November 20. The update adds several navigation improvements suggested by users in the field.
We’ve enjoyed hearing feedback from Binders users who are putting the app to work. Their stories are exciting, so we wanted to share a few real-world use cases of how Binders can support e-discovery workflows on the road.
As we mentioned in our recent post about proportionality, the costs of litigation and e-discovery are being scrutinized more than ever, and any way to leverage technology must be considered. Relativity Assisted Review has tremendous potential for expediting a review, but success requires some preparation and thought. Before you jump into an Assisted Review project, be sure to address the following considerations to get the most out of your workflow.
Verify the quality of your text.
The most important data consideration is the text in your data set. Assisted Review uses the text of the documents to determine conceptual relationships and make decisions. Poor or minimal text—which you might find in a data set dominated by drawings or numerical spreadsheets—means Assisted Review probably isn’t the right fit.
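One way to spot-check text quality before committing to a project is a simple heuristic pass over each document’s extracted text. The sketch below flags documents that are very short or dominated by non-alphabetic characters, as scanned drawings and numeric spreadsheets often are; the function name and thresholds are illustrative assumptions, not values from Assisted Review itself.

```python
def looks_text_poor(text, min_chars=100, min_alpha_ratio=0.6):
    """Heuristic flag for documents whose extracted text is likely too thin
    to train a concept engine. Thresholds are illustrative, not official."""
    stripped = text.strip()
    if len(stripped) < min_chars:
        return True  # too little text to establish conceptual relationships
    # Share of characters that are letters or whitespace; numeric dumps score low.
    alpha = sum(c.isalpha() or c.isspace() for c in text)
    return alpha / max(len(text), 1) < min_alpha_ratio
```

Running a check like this over a sample of the data set can reveal early whether a collection is dominated by poor-text documents and therefore a weak candidate for Assisted Review.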
In this installment of his interview series, Jay Leib—one of our resident computer-assisted review experts at kCura—interviews Nigel Tabaee of Deloitte Financial Advisory Services LLP’s Discovery practice about the use of computer-assisted review in investigations. Nigel is a senior manager at Deloitte and a member of their national discovery standards committee.
Jay: Can you describe your role at Deloitte?
Nigel: I’m a senior manager in our discovery practice, focused on helping our clients develop document review workflows that incorporate early data assessments, analytics, and advanced reporting. As a member of our national discovery standards committee, I also help develop leading practices for the Deloitte U.S. firms.
We recently addressed a few transparency issues currently under discussion and deliberation in the computer-assisted review community. Proportionality is often mentioned in the same conversations as these issues.
Simply put, proportionality asks and answers the question of how much money should be spent on a given task. It is an issue that reaches far and wide across the legal industry, touching not only on e-discovery, but also information governance and other subjects.
Brent Ozar is one of approximately 100 Microsoft Certified Masters of SQL Server, and we’ve been fortunate enough to have him helping out our users as a go-to SQL expert since 2011. Last week marked his second time attending Relativity Fest, where he presented two sessions on performance tuning for Relativity’s SQL Servers. Today, he’s sharing some of that insight as our first guest blogger.
Every month, kCura sends me to a different client to help system administrators with everything from making their Relativity SQL Servers faster and more reliable to designing the right indexes for their document table. I, in turn, take what I hear from customers and suggest how kCura can fold that feedback back into Relativity. As you might guess, this has given me a pretty good idea of how to tune the performance of your Relativity SQL Server. Let’s dive in.
Issue coding—which helps flag documents beyond just their responsiveness—is an integral part of the review process. In recent years, technology has moved case teams’ workflows for issue coding a long way from color-coded sticky notes. Today, reviewers can use an issue field in their review platform to record these tags. In marking records this way, teams identify the most relevant documents in a case so they can be easily found throughout the e-discovery process.
When it comes to transparency and quality control, seed documents are becoming a larger part of the conversation around computer-assisted review. Seeds are coded by human reviewers and used as examples to train a computer on a project’s categories. These seeds may be judgmentally selected by a case team or randomly selected via statistical sampling.
It is important to submit strong examples so the engine can best understand each category. Seed documents submitted with an incorrect designation can cause the engine to make incorrect decisions on other documents, potentially harming the outcome of the project. For this reason, if there is any doubt about the category of a document, a user should refrain from submitting it as an example.
The last two decades have yielded an explosion in technology for the business world, and the legal industry has not been immune to its reach. That said, adoption of new technologies hasn’t been immediate, and there has been a noticeable gap between tech-hungry and traditionalist attorneys. But over the last few years, something interesting has happened: that gap has begun to close.
Welcome to Relativity 8. Our newest release includes a lot of new features, and we’re excited about all of them. Over the next few weeks, we’ll highlight these features right here on the blog. Check back often to learn more about what’s new in Relativity 8.
Enhancements to Relativity Analytics include email threading, which can significantly reduce the number of emails that need to be reviewed during e-discovery and make the reviews that remain more efficient. Here’s a look at what you can do with email threading: