Legal Analytics in Litigation Finance – Jeff Cox Writes in Above the Law
Legal analytics power strategic business decisions across a wide array of industries, and we are pleased to share our very own Jeff Cox’s new article, Legal Analytics in Litigation Finance, recently published in Above the Law. Jeff’s article starts with a discussion of the underlying data involved in developing legal analytics and how legal tech companies today are compiling massive data sets from state and federal courts and normalizing those data sets with machine learning technology. After illustrating the size and scope of the data gathered by legal tech companies like UniCourt, Jeff details specific use cases for how litigation finance firms can leverage legal analytics to learn more about the parties and attorneys involved in claims they may want to fund, reducing underwriting risk in the process.
Below is an excerpt from the introduction of Jeff’s article:
The debate on the value of legal analytics for litigation finance should not be over whether the underlying data sets are viable; the data is available, especially for federal analytics. The real debate should center on how and why litigation finance should use legal analytics and court data to continue producing substantial rates of return for institutional investors.
The genesis of this article stems largely from an insightful piece on litigation finance by Above the Law’s very own David Lat, and his observation that “…there is a debate in litigation finance about the utility of analytics, given the nature and size of the data set.” Here, we’ll explore how advances in the legal technology space and improved access to data have made meaningful analytics possible and review examples of how analytics can be leveraged in litigation finance.
Show Me the Data
The facts relating to the underlying data are fundamental to any discussion on the viability of legal analytics in the litigation finance world. For starters, multiple legal tech companies collectively pull millions of data points from federal courts every day. While advocating on the one hand for improved access to data and the removal of restrictive paywalls, these companies are also spending hundreds of thousands of dollars (if not millions collectively) per year purchasing PACER data, creating mountains of actionable information for a myriad of use cases, including those in litigation finance.
On top of building these mountains of data, legal tech companies are also applying artificial intelligence (i.e., machine learning) to clean up and better structure their data to produce more meaningful analytics. Without machine learning techniques like normalization, which is used to clean raw data from PACER, it would not be possible to provide accurate analytics on the real-world entities that matter, such as the actual attorneys, parties, and judges involved in a case.
Imagine you want to know the analytics behind one of the countless attorneys named John Smith. With normalization, legal tech companies can not only ensure you are seeing the analytics for your particular John Smith, but they can also identify situations where the court may have misspelled John’s name, or other instances, such as John’s middle initial being included or omitted from court records. Being able to distinguish between and see the correct analytics for the John Smith or Jane Doe handling a case you want to invest in can be crucial when you’re weighing a potential multi-million-dollar investment.
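To make the idea concrete, here is a minimal sketch of what name normalization might look like under the hood. The roster, names, and matching rules below are hypothetical illustrations (real systems use far more sophisticated entity resolution); the sketch simply shows how a raw court-record name with a typo or a dropped middle initial can be mapped to one canonical attorney.

```python
from difflib import get_close_matches

# Hypothetical canonical roster of attorneys already known to the system.
CANONICAL_ATTORNEYS = ["john a. smith", "john b. smith", "jane doe"]

def normalize_name(raw: str) -> str:
    """Lowercase, drop stray commas, and collapse whitespace."""
    return " ".join(raw.lower().replace(",", " ").split())

def match_attorney(raw: str):
    """Map a raw court-record name to a canonical attorney,
    tolerating misspellings via fuzzy string matching."""
    name = normalize_name(raw)
    # Exact match after normalization handles case and spacing noise.
    if name in CANONICAL_ATTORNEYS:
        return name
    # Fuzzy match handles typos such as "Jon A. Smith".
    close = get_close_matches(name, CANONICAL_ATTORNEYS, n=1, cutoff=0.8)
    return close[0] if close else None

print(match_attorney("Jon A. Smith"))  # fuzzy-matches "john a. smith"
print(match_attorney("JANE DOE"))      # exact match after normalization
```

The design point is the two-stage check: cheap exact matching on a normalized key first, with fuzzy matching as a fallback so that court-entry variants of the same John Smith resolve to a single entity rather than inflating the roster.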
Given the sheer volume of data available and the continuing advances legal tech companies are making to better structure and enhance that data, the open question for the litigation finance industry is which of those millions of data points it should start tracking consistently to develop analytics and better position its next investments in successful legal claims.
You can read the full article here on Above the Law.