CodeX FutureLaw Takeaways – Legal AI in the Legal System

Topics: Conferences | Future Law | Legal Tech

Last month, legal academics, entrepreneurs, practitioners, policymakers, and the UniCourt team descended on Stanford Law School for CodeX FutureLaw: an annual conference exploring the technology shaping the legal profession and the questions it raises. Panels of accomplished legal industry pioneers shared their research and thoughts on topics ranging from legal education to the rise of free law, but one of the most innovative panels explored an issue spanning nearly every facet of the legal field: The Future of Legal Technology, Civil Procedure, and the Adversarial System.

As one of the panelists noted, legal tech has “tapped a rich vein of anxiety” about the future of the legal profession. Both practitioners and those serving an adjudicatory role fear what the rise of AI will do to their jobs and to the administration of justice. And while these concerns are certainly not unfounded, it is highly unlikely that AI will displace lawyers as key players in the adversarial process anytime soon.

At least, this was the consensus reached by the four panelists, who hailed from academia, the federal judiciary, high-profile civil litigation practice, and civil rights advocacy. The panelists echoed the belief that, at this juncture, we are a far cry from general AI, a technology capable of making complex human decisions that could, conceivably, replace lawyers as analysts and advocates. The legal profession does, however, already use machine learning, neural networks, and other forms of AI to streamline and change the ways that lawyers, judges, and others in the legal system make decisions. Over the course of an engaging and provocative hour, the panelists highlighted the key concerns that these developments have yet to resolve.

The Dangers Posed by Incomplete Data Sets

The panel first identified the perennial problem of sufficient data: the difficulty of gathering the complete data sets needed to properly train AI systems.

Elizabeth Cabraser, a high-profile plaintiffs’ attorney, first posed the question: “Where are we going to get the data” to begin to construct reliable predictive models? She cited this as a key obstacle to replacing the traditional jury trial with reliable, predictive adjudicatory models. Other panelists echoed the issue in varying contexts, from ruling on cases to forum selection. In each context, the problem of how not only to gather sufficiently representative data but also to use it effectively stands in the way of replacing human reasoning with AI.

At UniCourt, this issue largely fuels our drive to provide open access to court data. By supplying unfettered access to bulk data sets, we hope to spur the innovation that arises when legal technologists and entrepreneurs can work with complete data sets rather than fragments locked behind paywalls. By plugging into our Legal Data APIs, users can access bulk data more easily and affordably, allowing them to build a fuller profile of the legal issue or problem they are addressing. While this does not entirely solve the problem of incomplete data sets, it does set a precedent for making data accessible, affordable, and attainable – a critical step in the process.
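As a rough illustration of what that kind of access looks like in practice, the sketch below pages through a bulk case-search endpoint and collects the results for downstream analysis. The base URL, parameters, and response fields here are hypothetical placeholders chosen for demonstration, not the documented schema of the UniCourt Legal Data APIs.

```python
import requests

# Illustrative sketch only: the endpoint, parameters, and response shape below
# are assumptions for demonstration, not a specific provider's documented API.
API_BASE = "https://api.example-legal-data.com/v1"   # hypothetical base URL
API_KEY = "YOUR_API_KEY"                             # issued by the data provider

def fetch_case_pages(jurisdiction, case_type, max_pages=5):
    """Page through a bulk case-search endpoint and collect the results."""
    cases = []
    for page in range(1, max_pages + 1):
        resp = requests.get(
            f"{API_BASE}/cases",
            params={"jurisdiction": jurisdiction, "case_type": case_type, "page": page},
            headers={"Authorization": f"Bearer {API_KEY}"},
            timeout=30,
        )
        resp.raise_for_status()
        payload = resp.json()
        cases.extend(payload.get("results", []))
        if not payload.get("has_next_page"):   # stop once the data set is exhausted
            break
    return cases

if __name__ == "__main__":
    # e.g. pull a block of California debt-collection filings for analysis
    records = fetch_case_pages(jurisdiction="CA", case_type="debt_collection")
    print(f"Retrieved {len(records)} case records")
```

The point of bulk, programmatic access is that a researcher or entrepreneur can assemble the full population of cases they care about, rather than the partial slice a paywalled interface happens to expose.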

Enshrining Biases in Legal AI

Inextricably connected to the issue of data gathering is the inherent bias problem: Won’t algorithms perpetuate the very same biases we hold as humans?

Panelist Pamela Karlan, a civil rights attorney, raised the issue of California’s efforts to replace the cash bail system with a predictive algorithm that measures a defendant’s flight risk and likelihood of committing a crime while awaiting trial. Some civil rights advocates have lauded the algorithm for effacing class distinctions between defendants: under cash bail, those who can afford to bond out are economically separated from those who cannot, a divide the algorithm is meant to eliminate.

Nonetheless, others have pointed out the algorithm’s tendency to incorporate entrenched racial and socioeconomic biases, taking into account the very same factors that human decision makers consider, such as the gender and race of the accused. This is largely because the software compares defendants against those with similar profiles, using data from a system that employs (often discriminatory) benchmarks to effectuate stops, searches, and seizures.

“When we talk about the accuracy of predictions,” Karlan stated, “there’s also this normative question – about whether that’s how the system should operate.” She noted, for example, that while we know we cannot take race into account when setting a sentence, we nonetheless do; as a result, the data feeding the algorithm produces technology that simply enshrines human biases.
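To make that mechanism concrete, here is a minimal synthetic sketch of the dynamic Karlan describes: a model trained only on a seemingly neutral proxy feature still produces sharply different risk scores across groups, because the historical data it learns from already encodes uneven enforcement. The data, feature names, and model choice are illustrative assumptions, not a description of California’s actual risk assessment tool.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic illustration: a model trained on biased historical outcomes
# reproduces that bias through a correlated proxy feature, even when the
# protected attribute itself is excluded from the inputs.
rng = np.random.default_rng(0)
n = 10_000

group = rng.integers(0, 2, n)  # protected attribute (0 or 1), never shown to the model
# Proxy feature (e.g., prior arrests) correlated with group membership because of
# historically uneven enforcement rather than underlying behavior.
prior_arrests = rng.poisson(lam=np.where(group == 1, 2.0, 0.8))
# Historical "failure to appear" labels that encode the same enforcement skew.
p_label = 1 / (1 + np.exp(-(0.4 * prior_arrests + 0.8 * group - 2.0)))
labels = rng.binomial(1, p_label)

# Train on the proxy alone; the protected attribute is deliberately dropped.
model = LogisticRegression().fit(prior_arrests.reshape(-1, 1), labels)
scores = model.predict_proba(prior_arrests.reshape(-1, 1))[:, 1]

# Average predicted risk still differs sharply by group: the skew in the
# historical data has been enshrined in the "blind" model.
print("mean predicted risk, group 0:", scores[group == 0].mean().round(3))
print("mean predicted risk, group 1:", scores[group == 1].mean().round(3))
```

Dropping the protected attribute from the inputs does not remove it from the predictions; it merely hides the channel through which it operates, which is why the normative question Karlan raises cannot be answered by accuracy alone.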

The panel also considered whether there are particular case types better suited for predictive analytics models that also carry minimal danger of incorporating bias or falling short due to incomplete or inaccurate data sets. Judge Lee Rosenthal of the Southern District of Texas opined that less fact-intensive areas of law with repetitive legal claims and clear standards might lend themselves to resolution through predictive analytics.

Debt relief litigation, workers’ compensation, and bankruptcy are among the areas where predictive analytics may be able to assist. If predictive models can be successfully developed for high-volume consumer litigation matters, the benefits of shortening the time to case resolution and alleviating access-to-justice issues could be substantial.

The Future of Legal AI

2019 is poised to be a watershed year in the legal technology world, with discussions surrounding the future of law taking root in law schools, law firms, and legal departments nationwide. As key players in the legal field continue to push for change, and for technology’s role in effectuating it, more and more issues and questions will arise.

At UniCourt, we look forward to building on this dialogue about how legal tech will continue to change the legal profession. We’re committed to developing solutions that improve access to legal data, and we’re excited to see what changes and new solutions emerge between now and CodeX FutureLaw 2020.