Mecha Justice: When Machines Think Like Lawyers

The term “Mecha” envisions a futuristic artificial intelligence wrapped in human likeness and seamlessly woven into the activities of society.[1] It represents a time when the aggrandizement of our species will depend on technology that looks and thinks like us.[2] Today, prototypes of the attorney mecha are emerging from advances in computer reasoning and big data. The demands of increasingly complex legal transactions, sophisticated consumers, and the momentum of technology are putting pressure on the practice of law that only computer assistance can relieve.

In this age of super-surveillance, multimedia communications and limitless accumulation of data, the information science of law necessitates investment in cognitive computing. The era of small data is over. One by one, human modes of thinking are being supplemented by artificial intelligences that can search and analyze billions of gigabytes of information.[3] At the same time, the ongoing computerization of people is reducing their legal problems and disputes to datasets solvable by finely tuned algorithms.

The human race is being retooled to accommodate the massive data infrastructure we have created, an infrastructure that must in large measure be managed by thinking machines. So the practice of law, the advancement of legal reasoning, indeed the pursuit of justice, will have to partner with the technology that is remaking our society.

Still, the soul of legal rights resides in human authorship. While machine analytics can discover, calculate and make predictions, the weight of human values is borne by flesh-and-blood professionals. So it is that the essence of legal work is now the uncanny networking of minds and machines.

This compilation of notable news articles, scientific studies and legal scholarship highlights the evolving rights, responsibilities and roles of legal professionals and thinking machines.

BOOKS AND REPORTS

AI, Robotics, and the Future of Jobs (PEW 2014)
“This report is the latest in a sustained effort throughout 2014 by the Pew Research Center’s Internet Project to mark the 25th anniversary of the creation of the World Wide Web by Sir Tim Berners-Lee (The Web at 25). The report covers experts’ views about advances in artificial intelligence (AI) and robotics, and their impact on jobs and employment.”

Complex Operational Decision Making in Networked Systems of Humans and Machines: A Multidisciplinary Approach (NAP 2014)
“Over the last two decades, computers have become omnipresent in daily life. Their increased power and accessibility have enabled the accumulation, organization, and analysis of massive amounts of data. These data, in turn, have been transformed into practical knowledge that can be applied to simple and complex decision making alike. In many of today’s activities, decision making is no longer an exclusively human endeavor. In both virtual and real ways, technology has vastly extended people’s range of movement, speed and access to massive amounts of data. Consequently, the scope of complex decisions that human beings are capable of making has greatly expanded. At the same time, some of these technologies have also complicated the decision making process. The potential for changes to complex decision making is particularly significant now, as advances in software, memory storage and access to large amounts of multimodal data have dramatically increased. Increasingly, our decision making process integrates input from human judgment, computing results and assistance, and networks. Human beings do not have the ability to analyze the vast quantities of computer-generated or -mediated data that are now available. How might humans and computers team up to turn data into reliable (and when necessary, speedy) decisions? Complex Operational Decision Making in Networked Systems of Humans and Machines explores the possibilities for better decision making through collaboration between humans and computers. This study is situated around the essence of decision making; the vast amounts of data that have become available as the basis for complex decision making; and the nature of collaboration that is possible between humans and machines in the process of making complex decisions.”

Future of the Professions: How Technology Will Transform the Work of Human Experts (OUP 2016)
“This book predicts the decline of today’s professions and describes the people and systems that will replace them. In an Internet society, according to Richard Susskind and Daniel Susskind, we will neither need nor want doctors, teachers, accountants, architects, the clergy, consultants, lawyers, and many others, to work as they did in the 20th century. The Future of the Professions explains how ‘increasingly capable systems’ — from telepresence to artificial intelligence — will bring fundamental change in the way that the ‘practical expertise’ of specialists is made available in society.”

SCHOLARLY ARTICLES

Artificial Intelligence: Robots, Avatars, and the Demise of the Human Mediator, 25 Ohio St. J. on Disp. Resol. 105 (2010)
“As technology has advanced, many have wondered whether (or simply when) artificial intelligent devices will replace the humans who perform complex, interactive, interpersonal tasks such as dispute resolution. Has science now progressed to the point that artificial intelligence devices can replace human mediators, arbitrators, dispute resolvers and problem solvers? Can humanoid robots, attractive avatars and other relational agents create the requisite level of trust and elicit the truthful, perhaps intimate or painful, disclosures often necessary to resolve a dispute or solve a problem? This article will explore these questions. Regardless of whether the reader is convinced that the demise of the human mediator or arbitrator is imminent, one cannot deny that artificial intelligence now has the capability to assume many of the responsibilities currently being performed by alternative dispute resolution (ADR) practitioners. It is fascinating (and perhaps unsettling) to realize the complexity and seriousness of tasks currently delegated to avatars and robots. This article will review some of those delegations and suggest how the artificial intelligence developed to complete those assignments may be relevant to dispute resolution and problem solving. “Relational Agents,” which can have a physical presence such as a robot, be embodied in an avatar, or have no detectable form whatsoever and exist only as software, are able to create long term socio-economic relationships with users built on trust, rapport and therapeutic goals. Relational agents are interacting with humans in circumstances that have significant consequences in the physical world. These interactions provide insights as to how robots and avatars can participate productively in dispute resolution processes. Can human mediators and arbitrators be replaced by robots and avatars that not only physically resemble humans, but also act, think, and reason like humans? And to raise a particularly interesting question, can robots, avatars and other relational agents look, move, act, think, and reason even “better” than humans?”

Automatic Justice? Technology, Crime and Social Control, SSRN (2015)
“This paper examines how forensic science and technology are reshaping crime investigation, prosecution and the administration of criminal justice. It illustrates the profound effect of new scientific techniques, data collection devices and mathematical analytical procedures on the traditional criminal justice system. These blur the boundary between the innocent person, the suspect, the accused and the convicted. They also blur the boundary between evidence collection, testing its veracity and probative value, the adjudication of guilt and punishment. The entire process is being automated and temporally and procedurally compressed. At the same time, the start and finish of the criminal justice process are now indefinite and indistinct as a result of the introduction of mass surveillance and the erosion of ‘double jeopardy’ protections caused by scientific advances that make it possible to revisit conclusions reached in the distant past. This, we argue, indicates a move towards a system of ‘automatic justice’ that is mediated by technology in ways that minimise human agency and undercut the due process safeguards built into the traditional criminal justice model. The paper concludes that in order to re-balance the relationship between state and citizen in an automatic criminal justice system, we may need to accept the limitations of the existing criminal procedure framework and deploy privacy and data protection law, which are now highly relevant to criminal justice.”

Big Data and Predictive Reasonable Suspicion, 163 Univ. Penn. L. Rev. 327 (2015)
“The Fourth Amendment requires “reasonable suspicion” to seize a suspect. As a general matter, the suspicion derives from information a police officer observes or knows. It is individualized to a particular person at a particular place. Most reasonable suspicion cases involve police confronting unknown suspects engaged in observable suspicious activities. Essentially, the reasonable suspicion doctrine is based on “small data” – discrete facts involving limited information and little knowledge about the suspect.

But what if this small data is replaced by “big data”? What if police can “know” about the suspect through new networked information sources? Or, what if predictive analytics can forecast who will be the likely troublemakers in a community? The rise of big data technology offers a challenge to the traditional paradigm of Fourth Amendment law. Now, with little effort, most unknown suspects can be “known,” as a web of information can identify and provide extensive personal data about a suspect independent of the officer’s observations. New data sources, including law enforcement databases, third party information sources (phone records, rental records, GPS data, video surveillance data, etc.), and predictive analytics, combined with biometric or facial recognition software, mean that information about that suspect can be known in a few data searches. At some point, the data (independent of the observation) may become sufficiently individualized and predictive to justify the seizure of a suspect. The question this article poses is whether a Fourth Amendment stop can be predicated on the aggregation of specific, individualized, but otherwise non-criminal factors.

This article traces the consequences of the shift from a “small data” reasonable suspicion doctrine, focused on specific, observable actions of unknown suspects, to the “big data” reality of an interconnected, information-rich world of known suspects. With more targeted information, police officers on the streets will have a stronger predictive sense about the likelihood that they are observing criminal activity. This evolution, however, only hints at the promise of big data policing. The next phase will be using existing predictive analytics to target suspects without any actual observation of criminal activity, merely relying on the accumulation of various data points. Unknown suspects will become known, not because of who they are but because of the data they left behind. Using pattern matching techniques through networked databases, individuals will be targeted out of the vast flow of informational data. This new reality subverts reasonable suspicion, turning it from a source of protection against unreasonable stops into a means of justifying those same stops.”

Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI, 3(7) PLoS ONE e2597 (2008)
“When our PC goes on strike again we tend to curse it as if it were a human being. Why and under which circumstances do we attribute human-like properties to machines? Although humans increasingly interact directly with machines it remains unclear whether humans implicitly attribute intentions to them and, if so, whether such interactions resemble human-human interactions on a neural level. In social cognitive neuroscience the ability to attribute intentions and desires to others is being referred to as having a Theory of Mind (ToM). With the present study we investigated whether an increase of human-likeness of interaction partners modulates the participants’ ToM associated cortical activity.”

Can Robots Be Lawyers? Computers, Lawyers, and the Practice of Law, SSRN (2015)
“We assess frequently-advanced arguments that automation will soon replace much of the work currently performed by lawyers. Our assessment addresses three core weaknesses in the existing literature: (i) a failure to engage with technical details to appreciate the capacities and limits of existing and emerging software; (ii) an absence of data on how lawyers divide their time among various tasks, only some of which can be automated; and (iii) inadequate consideration of whether algorithmic performance of a task conforms to the values, ideals and challenges of the legal profession. Combining a detailed technical analysis with a unique data set on time allocation in large law firms, we estimate that automation has an impact on the demand for lawyers’ time that, while measurable, is far less significant than popular accounts suggest. We then argue that the existing literature’s narrow focus on employment effects should be broadened to include the many ways in which computers are changing (as opposed to replacing) the work of lawyers. We show that the relevant evaluative and normative inquiries must begin with the ways in which computers perform various lawyering tasks differently than humans. These differences inform the desirability of automating various aspects of legal practice, while also shedding light on the core values of legal professionalism.”

Computable Contracts, 46 U.C. Davis L. Rev. 629 (2012)
“It is possible to formulate contractual obligations so that computers can ‘understand’ and make prima-facie compliance assessments with specified terms and conditions. Such a contractual obligation, formulated specifically for computer processability, is what this Article terms a ‘computable contract.’ Computable contracts are not merely theoretical, but instead are increasingly being used in economically significant domains. Certain widely used financial contracts exemplify this model. The emergence of computable contracts has largely been unrecognized in the legal literature. However, computable contracting is not extensible across all, or even most, contracting scenarios. Rather, it is limited to a small subset of contracting scenarios involving standardization, and relative legal and factual certainty. Drawing upon computer science research, this Article provides a theoretical account of computable contracting. It first explains how firms can communicate contracting information to computers by representing contracts as data instead of (or in addition to) the traditional written language form. Formalizing contractual obligations in this way is what is termed ‘data-oriented’ contracting. The representation of contractual obligations as data, in turn, allows for novel contracting properties. For example, parties can effectively ‘translate’ certain contractual criteria into a comparable set of computer-processable rules. To make contracts ‘computable’, parties provide computer systems with external data that is relevant to performance. This model is supported by contemporary examples of computable contracts in domains ranging from finance to intellectual property. This Article also provides principles for distinguishing contracting scenarios that are amenable to computability from those that are not.”
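
The mechanics of a “data-oriented” contract are easy to picture in code. Below is a minimal sketch, assuming an invented DeliveryTerm record and a single hypothetical term: the contract is represented as data, and a prima facie compliance assessment is just a rule evaluated against external performance data, in the spirit of what the Article describes for financial contracts.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class DeliveryTerm:                      # hypothetical contract term represented as data
    sku: str
    quantity: int
    deliver_by: date

def assess_compliance(term: DeliveryTerm, delivered_qty: int, delivered_on: date) -> bool:
    """Prima facie check only; disputed or ambiguous cases still need human judgment."""
    return delivered_qty >= term.quantity and delivered_on <= term.deliver_by

term = DeliveryTerm(sku="WIDGET-9", quantity=500, deliver_by=date(2016, 6, 30))
print(assess_compliance(term, delivered_qty=500, delivered_on=date(2016, 6, 28)))  # True
```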

Creating New Pathways to Justice Using Simple Artificial Intelligence and Online Dispute Resolution, SSRN (2015)
“Access to justice can be improved significantly through implementation of simple artificial intelligence (AI) based expert systems deployed within a broader online dispute resolution (ODR) framework. Simple expert systems can bridge the ‘implementation gap’ that continues to impede the adoption of AI in the justice domain. This gap can be narrowed further through the design of multi-disciplinary expert systems that address user needs through simple, non-legalistic user interfaces. This article provides a non-technical conceptual description of an expert system designed to enhance access to justice for non-experts. The system’s knowledge base would be populated with expert knowledge from the justice and dispute resolution domains. A conditional logic rule-based system forms the basis of the inference engine located between the knowledge base and a questionnaire-based user interface. The expert system’s functions include problem diagnosis, delivery of customized information, self-help support, triage and streaming into subsequent ODR processes. Its usability is optimized through the engagement of human computer interaction (HCI) and affective computing techniques that engage the social and emotional sides of technology. The conceptual descriptions offered in this article draw support from empirical observations of an innovative project aimed at creating an expert system for an ODR-enabled civil justice tribunal.”
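
The architecture the article describes, a conditional-logic rule base sitting between a questionnaire interface and ODR streams, can be suggested with a toy sketch. Everything here (the rule conditions, the dollar threshold, the track names) is invented for illustration:

```python
# Hypothetical conditional-logic rules: questionnaire answers go in, a diagnosis
# and a stream into a subsequent ODR process come out. First matching rule fires.
RULES = [
    (lambda a: a.get("urgent_safety_issue"),
     "triage out: immediate referral to emergency legal services"),
    (lambda a: a.get("dispute_type") == "deposit" and a.get("amount", 0) <= 5000,
     "small-claims ODR track: guided negotiation, then adjudication"),
    (lambda a: a.get("dispute_type") == "deposit",
     "tribunal intake with customized self-help materials"),
]

def triage(answers):
    """Fire the first rule whose condition matches the user's answers."""
    for condition, outcome in RULES:
        if condition(answers):
            return outcome
    return "deliver general legal information in plain language"

print(triage({"dispute_type": "deposit", "amount": 1200, "urgent_safety_issue": False}))
# -> small-claims ODR track: guided negotiation, then adjudication
```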

Cyberdelegation and the Administrative State, SSRN (2016)
“This paper explores questions and trade-offs associated with delegating administrative agency decisions to computer algorithms and neural networks, and offers the following preliminary observations to further discussion of the opportunities and risks. First, neither conventional expert systems nor their likely short-term successors will be in a position to resolve (without human intervention) context-specific debates about society’s goals for regulation or administrative adjudication – and these debates are often inherent in the implementation of statutes. Those goals must also inform whether we assign value to aspects of human cognition that contrast with what computers can (presently) accomplish, or what might be conventionally defined as rational in a decision-theoretic sense. Second, society must consider path-dependent consequences and associated cybersecurity risks that could arise from reliance on computers to make and support decisions. Such consequences include the erosion of individual and organizational knowledge over time. Third, it may prove difficult to limit the influence of computer programs even if they are meant to be mere decision support tools rather than the actual means of making a decision. Finally, heavy reliance on computer programs – particularly adaptive ones that modify themselves over time – may further complicate public deliberation about administrative decisions, because few if any observers will be entirely capable of understanding how a given decision was reached.”

Forecasting Domestic Violence: A Machine Learning Approach to Help Inform Arraignment Decisions, 13 J. Empirical Legal Stud. 94 (2016)
“Arguably the most important decision at an arraignment is whether to release an offender until the date of his or her next scheduled court appearance. Under the Bail Reform Act of 1984, threats to public safety can be a key factor in that decision. Implicitly, a forecast of “future dangerousness” is required. In this article, we consider in particular whether usefully accurate forecasts of domestic violence can be obtained. We apply machine learning to data on over 28,000 arraignment cases from a major metropolitan area in which an offender faces domestic violence charges. One of three possible post‐arraignment outcomes is forecasted within two years: (1) a domestic violence arrest associated with a physical injury, (2) a domestic violence arrest not associated with a physical injury, and (3) no arrests for domestic violence. We incorporate asymmetric costs for different kinds of forecasting errors so that very strong statistical evidence is required before an offender is forecasted to be a good risk. When an out‐of‐sample forecast of no post‐arraignment domestic violence arrests within two years is made, it is correct about 90 percent of the time. Under current practice within the jurisdiction studied, approximately 20 percent of those released after an arraignment for domestic violence are arrested within two years for a new domestic violence offense. If magistrates used the methods we have developed and released only offenders forecasted not to be arrested for domestic violence within two years after an arraignment, as few as 10 percent might be arrested. The failure rate could be cut nearly in half. Over a typical 24‐month period in the jurisdiction studied, well over 2,000 post‐arraignment arrests for domestic violence perhaps could be averted.”
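
As a rough illustration of forecasting with asymmetric error costs, here is a sketch in the spirit of the study, not its actual data or method: class weights make a false “good risk” call roughly ten times as costly as a false “bad risk” one, so the model demands strong statistical evidence before forecasting no future arrest. The features and weights are fabricated.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 6))                            # stand-in arraignment features
y = (X[:, 0] + rng.normal(size=5000) > 1.2).astype(int)   # 1 = later DV arrest (synthetic)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Treat missing a future-arrest case (calling it safe) as ~10x worse than the reverse.
model = RandomForestClassifier(n_estimators=300, class_weight={0: 1, 1: 10}, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```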

Four Futures of Legal Automation, 63 UCLA L. Rev. Disc. 26 (2015)
“Simple legal jobs (such as document coding) are prime candidates for legal automation. More complex tasks cannot be routinized. So far, the debate on the likely scope and intensity of legal automation has focused on the degree to which legal tasks are simple or complex. Just as important to the legal profession, however, is the degree of regulation or deregulation likely in the future. Situations involving conflicting rights, unique fact patterns, and open-ended laws will remain excessively difficult to automate for an extended period of time. Deregulation, however, may effectively strip many persons of their rights, rendering once-hard cases simple. Similarly, disputes that now seem easy, because one party is so clearly in the right, may be rendered hard to automate by new rules that give now disadvantaged parties new rights. By explaining how each of these reversals could arise, this Essay combines technical and sociological analyses of the future of legal automation. We conclude that the future of artificial intelligence in law is more open ended than most commentators suggest.”

Great Disruption: How Machine Intelligence Will Transform the Role of Lawyers in the Delivery of Legal Services, 82 Fordham L. Rev. 3041 (2014)
“This Article argues that machines are coming to disrupt the legal profession and that bar regulation cannot stop them. Part I describes the relentless growth of computer power in hardware, software, and data collection capacity. This Part emphasizes that machine intelligence is not a one-time event that lawyers will have to accommodate. Instead, it is an accelerating force that will invade an ever-larger territory and exercise a more firm dominion over this larger area. We then describe five areas in which machine intelligence will provide services or factors of production currently provided by lawyers: discovery, legal search, document generation, brief generation, and prediction of case outcomes. Superstars and specialists in fast changing areas of the law will prosper — and litigators and counselors will continue to profit — but the future of the journeyman lawyer is insecure. Part II discusses how these developments may create unprecedented competitive pressures in many areas of lawyering. This Part further shows that bar regulation will be unable to stop such competition. The legal ethics rules permit, and indeed where necessary for lawyers to provide competent representation, require lawyers to employ machine intelligence. Even though unauthorized practice of law statutes on their face prohibit nonlawyers’ use of machine intelligence to provide legal services to consumers, these laws have failed, and are likely to continue to fail, to limit the delivery of legal services through machine intelligence. As a result, we expect an age of unparalleled innovation in legal services and reject the view of commentators who worry that bar regulations are a significant stumbling block to technological innovation in legal practice. Indeed, in the long run, the role of machine intelligence in providing legal services will speed the erosion of lawyers’ monopoly on delivering legal services and will advantage consumers and society by making legal services more transparent and affordable.”

Humans and Humans+: Technological Enhancement and Criminal Responsibility, 19 B.U. J. Sci. & Tech. L. 215 (2013)
“This article examines the implications our use of technological enhancements to improve our physical and/or cognitive abilities will necessarily have on the processes of imposing criminal responsibility on those who victimize others. It explains that while our use of such enhancements is still in its infancy, it is more than likely that their use will dramatically accelerate over the next century or less. The article examines how law has historically approached the concept of a “legal person,” with reference to “normal” humans, “abnormal” humans, animals, objects, supernatural beings and juristic persons. It also reviews how two other authors have analyzed the general legal issues our use of enhancements and other technological advancements are likely to raise. The primary focus of the article, however, is on analyzing how criminal law will need to adapt once our world is populated by two classes of humans: Standard humans (basic Homo sapiens sapiens) and Enhanced humans (Homo sapiens sapiens whose native abilities have been augmented beyond the range of possibilities for their Standard brethren). I [Susan W. Brenner] assume this very basic divergence between humans because it suffices for my analyses, and because I assume that creating a new species or subspecies of Homo sapiens sapiens is likely to be difficult and will therefore not eventuate in the near future.
I use various scenarios, e.g., Standard perpetrator-Enhanced victim, Enhanced perpetrator-Standard victim, to analyze how criminal law can, and should, adapt to a world in which all humans are not equal. I use statutory rape statutes as an example of law that is designed to protect a distinct and vulnerable class of humans, and speculate as to whether this approach could be extrapolated to Standard humans. I also explore the viability of extrapolating other, similar principles, such as vulnerable victims, into this context. And I briefly analyze the possibility that future law might address this situation by implementing a caste system to “protect” Standard humans from their superior counterparts. My goal is not to predict how future criminal law should deal with human enhancement but to note the likelihood that it will have to do so.”

I Think, Therefore I Invent: Creative Computers and the Future of Patent Law, SSRN (2016)
“An innovation revolution is on the horizon. Artificial intelligence has been generating inventive output for decades, and now the continued and exponential growth in computing power is poised to take creative machines from novelties to major drivers of economic growth. A creative singularity is foreseeable in which computers overtake human inventors as the primary source of new discoveries.
In some cases, a computer’s output constitutes patentable subject matter, and the computer rather than a person meets the requirements for inventorship. Despite this, and despite the fact that the Patent Office has already granted patents for inventions by computers, the issue of computer inventorship has never been explicitly considered by the courts, Congress, or the Patent Office. Yet the issue of whether a computer can be an inventor is an eminently practical one — not only do inventors have ownership rights in a patent, but failure to list an inventor can result in a patent being held invalid or unenforceable.

Drawing on dynamic principles of statutory interpretation and taking analogies from the copyright context, this article argues that creative computers should be considered inventors under the Patent and Copyright Clause of the Constitution. Treating nonhumans as inventors would incentivize the creation of intellectual property by encouraging the development of creative computers. The article proceeds to address a host of challenges that would result from computer inventorship, ranging from ownership of computer-based inventions, to displacement of human inventors, to the need for consumer protection policies.

This analysis applies more broadly to nonhuman creators of intellectual property, and explains why the Copyright Office came to the wrong conclusion with its Human Authorship Requirement. Just as permitting computer inventorship will further promote the progress of science, so too will permitting animal authorship promote the useful arts by creating new incentives for people.

Finally, computer inventorship provides insight into other areas of patent law. For instance, computers could replace the hypothetical skilled person that courts use to judge inventiveness. This would provide justification for raising the bar to patentability and would address one of the most serious criticisms of the patent system — that too many patents of questionable value are issued. Creative computers may require a rethinking of the baseline standard for inventiveness, and potentially of the entire patent system.”

If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability, SSRN (2016)
“The fact that robots, especially self-driving cars, have become part of our daily lives raises novel issues in criminal law. Robots can malfunction and cause serious harm. But as things stand today, they are not suitable recipients of criminal punishment, mainly because they cannot conceive of themselves as morally responsible agents and because they cannot understand the concept of retributive punishment. Humans who produce, program, market and employ robots are subject to criminal liability for intentional crime if they knowingly use a robot to cause harm to others. A person who allows a self-teaching robot to interact with humans can foresee that the robot might get out of control and cause harm. This fact alone may give rise to negligence liability. In light of the overall social benefits associated with the use of many of today’s robots, however, the authors argue in favor of limiting the criminal liability of operators to situations where they neglect to undertake reasonable measures to control the risks emanating from robots.”

Law in the Future, SSRN (2016)
“The set of tasks and activities in which humans are strictly superior to computers is becoming vanishingly small. Machines today are not only performing mechanical or manual tasks once performed by humans, they are also performing thinking tasks, where it was long believed that human judgment was indispensable. From self-driving cars to self-flying planes; and from robots performing surgery on a pig to artificially intelligent personal assistants, so much of what was once unimaginable is now reality. But this is just the beginning of the big data and artificial intelligence revolution. Technology continues to improve at an exponential rate. How will the big data and artificial intelligence revolutions affect law? We hypothesize that the growth of big data, artificial intelligence, and machine learning will have important effects that will fundamentally change the way law is made, learned, followed, and practiced. It will have an impact on all facets of the law, from the production of micro-directives to the way citizens learn of their legal obligations. These changes will present significant challenges to human lawmakers, judges, and lawyers. While we do not attempt to address all these challenges, we offer a short and positive preview of the future of law: a world of self-driving law, of legal singularity, and of the democratization of the law.”

Legal Machines and Legal Act Production within Multisensory Operational Implementations, SSRN (2011)
“The concept of the legal machine is elaborated: first, the creation of institutional facts by machines, and, second, multimodal communication of legal content to humans. Examples are traffic lights, vending machines, workflows, etc. Machines can be assigned the status-functions of legal actors. Their acts have legal importance and entail legal consequences. Thus the concept of iustitia distributiva and societal distribution is enhanced. The analogy of machines with humans is explored. Legal content, which is communicated by machines, can be perceived by all of our senses and expressed in multimodal languages: textual, visual, acoustic, gestures, aircraft manoeuvres, etc. This paper introduces the concept of encapsulating the human into the machine. Human-intended actions are communicated to third persons through the machine’s output channel. Encapsulations are compared with deities and mythical creatures, which can send the gods’ messages to people through a human mouth.”

Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment, 164 U. Pa. L. Rev. 871 (2016)
“At the conceptual intersection of machine learning and government data collection lie Automated Suspicion Algorithms, or ASAs, algorithms created through the application of machine learning methods to collections of government data with the purpose of identifying individuals likely to be engaged in criminal activity. The novel promise of ASAs is that they can identify data-supported correlations between innocent conduct and criminal activity and help police prevent crime. ASAs present a novel doctrinal challenge, as well, as they intrude on a step of the Fourth Amendment’s individualized suspicion analysis previously the sole province of human actors: the determination of when reasonable suspicion or probable cause can be inferred from established facts. This Article analyzes ASAs under existing Fourth Amendment doctrine for the benefit of courts who will soon be asked to deal with ASAs. In the process, the Article reveals how that doctrine is inadequate to the task of handling these new technologies and proposes extra-judicial means of ensuring that ASAs are accurate and effective.”
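
To make the concept concrete, here is a deliberately toy sketch of what an ASA reduces to: a mapping from established facts to a suspicion score, with a numeric threshold standing in for the reasonable-suspicion line that courts would have to police. Every fact name and weight is invented; no real system is depicted.

```python
# Hypothetical weights over observable, individually non-criminal facts.
FACT_WEIGHTS = {
    "matched_watchlist_pattern": 0.35,
    "anomalous_travel_history": 0.20,
    "flagged_financial_activity": 0.30,
}
REASONABLE_SUSPICION_THRESHOLD = 0.5  # the doctrinal line a court would have to police

def suspicion_score(facts):
    """Sum the weights of every established fact present in the record."""
    return sum(w for k, w in FACT_WEIGHTS.items() if facts.get(k))

facts = {"matched_watchlist_pattern": True, "anomalous_travel_history": True}
score = suspicion_score(facts)
print(score, score >= REASONABLE_SUSPICION_THRESHOLD)  # ~0.55 True
```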

Machines Learning Justice: The Case for Judgmental Bootstrapping of Legal Decisions, SSRN (2015)
““Justice,” the trope goes, “is what the judge ate for breakfast.” The problems of inconsistency in legal decision making are increasingly apparent. Research indicates, for example, that the idiosyncrasies of a judge, the outcome of a football game, the results of the immediately preceding case, and the time of day can substantially affect legal decisions. But while the problem has become clearer, solutions have not. We propose a new tool for reducing inconsistency in legal decision making: Judgmental Bootstrapping Models (“JBMs”) built with machine learning methods. By providing judges with recommendations generated from statistical models of themselves, JBMs can help those judges make more consistent, fairer, and better decisions. They can also help address deficiencies of algorithms currently used to inform legal decisions. To illustrate these advantages, we build a JBM of release decisions for the California Board of Parole Hearings. The JBM correctly classifies 79% of validation-set parole decisions, and if the model would have recommended against parole but the Board nonetheless granted it, the Board was two and a half times more likely to be reversed.”
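
A judgmental bootstrapping model is simply a statistical model of the decision maker's own past rulings, surfaced back as a recommendation. A minimal sketch, assuming fabricated case features and decisions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Stand-ins for case features, e.g. offense severity, years served, program completion.
X_past = rng.normal(size=(2000, 3))
granted = (0.8 * X_past[:, 2] - 0.5 * X_past[:, 0] + rng.normal(scale=0.5, size=2000)) > 0

jbm = LogisticRegression().fit(X_past, granted)   # a model of the board's own behavior

new_case = np.array([[0.2, -1.0, 1.5]])
p = jbm.predict_proba(new_case)[0, 1]             # probability the board would grant parole
print(f"bootstrapped recommendation: grant, with p = {p:.2f}")
```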

Predicting the Behavior of the Supreme Court of the United States: A General Approach, SSRN (2014)
“Building upon developments in theoretical and applied machine learning, as well as the efforts of various scholars including Guimera and Sales-Pardo (2011), Ruger et al. (2004), and Martin et al. (2004), we construct a model designed to predict the voting behavior of the Supreme Court of the United States. Using the extremely randomized tree method first proposed in Geurts, et al. (2006), a method similar to the random forest approach developed in Breiman (2001), as well as novel feature engineering, we predict more than sixty years of decisions by the Supreme Court of the United States (1953-2013). Using only data available prior to the date of decision, our model correctly identifies 69.7% of the Court’s overall affirm/reverse decisions and correctly forecasts 70.9% of the votes of individual justices across 7,700 cases and more than 68,000 justice votes. Our performance is consistent with the general level of prediction offered by prior scholars. However, our model is distinctive as it is the first robust, generalized, and fully predictive model of Supreme Court voting behavior offered to date. Our model predicts six decades of behavior of thirty Justices appointed by thirteen Presidents. With a more sound methodological foundation, our results represent a major advance for the science of quantitative legal prediction and portend a range of other potential applications, such as those described in Katz (2013).”
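
The paper's core method, extremely randomized trees, is available off the shelf. A brief sketch using scikit-learn's ExtraTreesClassifier on placeholder data; the authors engineered their real features from Supreme Court Database variables:

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
X = rng.integers(0, 4, size=(7700, 12)).astype(float)  # placeholder case features
y = rng.integers(0, 2, size=7700)                      # affirm (0) / reverse (1)

clf = ExtraTreesClassifier(n_estimators=500, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 here: the data is noise
```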

Robot, Esq., SSRN (2013)
“In the not-too-distant future, artificial intelligence systems will have the ability to reduce answering a legal question to the simplicity of performing a search. As transformational as this technology may be, it raises fundamental questions about how we view our legal system, the representation of clients, and the development of our law.
Before considering whether we can develop this technology, we must pause to consider whether we should. Will it actually improve conditions for attorneys, non-attorneys, and the rule of law? There are three important issues inherent in this change. First, what are the ethical implications of this technology for the traditional attorney-client relationship? Second, what are the jurisprudential implications of non-humans making and developing legal arguments? Third, how should we, if at all, develop the legal and regulatory regimes to allow systems to engage in the practice of law? This article opens the first chapter in this process, and sets forth an agenda of issues to consider as law, technology, and justice converge.”

Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513 (2015)
“Two decades of analysis have produced a rich set of insights as to how the law should apply to the Internet’s peculiar characteristics. But, in the meantime, technology has not stood still. The same public and private institutions that developed the Internet, from the armed forces to search engines, have initiated a significant shift toward robotics and artificial intelligence. This article is the first to examine what the introduction of a new, equally transformative technology means for cyberlaw and policy. Robotics has a different set of essential qualities than the Internet and, accordingly, will raise distinct legal issues. Robotics combines, for the first time, the promiscuity of data with the capacity to do physical harm; robotic systems accomplish tasks in ways that cannot be anticipated in advance; and robots increasingly blur the line between person and instrument. Robotics will prove “exceptional” in the sense of occasioning systematic changes to law, institutions, and the legal academy. But we will not be writing on a clean slate: Many of the core insights and methods of cyberlaw will prove crucial in integrating robotics, and perhaps whatever technology follows.”

Technology and the Guilty Mind: When Do Technology Providers Become Criminal Accomplices?, 105 J. Crim. L. & Criminology 95 (2015)
“The creators of today’s most successful technologies share an important willingness to push the envelope — a drive that propels digital industry forward. This same drive, however, can lead some technology purveyors to push the limits of legality or even become scofflaws in their pursuit of innovation or (more often) profit. The United States must figure out how to harness the important creative force at the heart of the hacker ethic while still deterring destructive criminal wrongdoers. Because it is often courts that must answer this question, it is essential to examine the legal doctrines prosecutors use to sweep up technology providers.
This Article focuses on one type of criminal liability — accomplice liability — that can act as a dragnet on technology that lends itself to criminal use. In particular, a violation of the federal statute for aiding and abetting, 18 U.S.C. § 2, can be implied in every charge for a federal substantive offense, and there is a potentially troubling strain of cases holding that knowing assistance can be enough to deem someone an aider and abettor, even without stronger evidence of a shared criminal purpose.
This Article examines when proprietors of technology with both legal and illegal uses aid and abet their users’ crimes. The aim is to help courts, prosecutors, and technologists draw the line between joining a criminal enterprise and merely providing technology with criminal uses. The Article explains the legal doctrines underlying this type of liability and provides examples of at-risk technologies, including spam software, filesharing services, and anonymity networks like Tor. Ultimately the Article concludes that the web of superficially conflicting rulings on the required mental state for aiding and abetting is best harmonized — and future rulings on liability for new technologies best predicted — by looking to the existence of “substantial unoffending uses” for the product or service provided by the technologist accused of aiding and abetting.”

Trial by Machine, 104 Geo. L.J. 1245 (2016)
“This Article explores the rise of “machines” in criminal adjudication. Human witnesses now often give way to gadgets and interpretive software, juries’ complex judgments about moral blameworthiness give way to mechanical proxies for criminality, and judges’ complex judgments give way to sentencing guidelines and actuarial instruments. Although mechanization holds much promise for enhancing objectivity and accuracy in criminal justice, that promise remains unrealized because of the uneven, unsystematic manner in which mechanized justice has been developed and deployed. The current landscape of mechanized proof, liability, and punishment suffers from predictable but underscrutinized automation pathologies: hidden subjectivities and errors in “black box” processes; distorted decision-making through oversimplified — and often dramatically inaccurate — proxies for blameworthiness; the compromise of values protected by human safety valves, such as dignity, equity, and mercy; and even too little mechanization where machines might be a powerful debiasing tool but where little political incentive exists for its development or deployment. For example, the state promotes the objectivity of interpretive DNA software that typically renders match statistics more inculpatory, but lionizes the subjective human judgment of its fingerprint and toolmark analysts, whose grandiose claims of identity might be diluted by such software. Likewise, the state attacks the polygraph as an unreliable lie detector at trial, where results are typically offered only by defendants, but routinely wields them in probation revocation hearings, capitalizing in that context on their cultural status as “truth machines.” The Article ultimately proposes a systems approach – “trial by cyborg” – that safeguards against automation pathologies while interrogating conspicuous absences in mechanization through “equitable surveillance” and other means.”

Using Algorithmic Attribution Techniques to Determine Authorship in Unsigned Judicial Opinions, 16 Stan. Tech. L. Rev. 503 (2013)
“This Article proposes a novel and provocative analysis of judicial opinions that are published without indicating individual authorship. Our approach provides an unbiased, quantitative, and computer scientific answer to a problem that has long plagued legal commentators. United States courts publish a shocking number of judicial opinions without divulging the author. Per curiam opinions, as traditionally and popularly conceived, are a means of quickly deciding uncontroversial cases in which all judges or justices are in agreement. Today, however, unattributed per curiam opinions often dispose of highly controversial issues, frequently over significant disagreement within the court. Obscuring authorship removes the sense of accountability for each decision’s outcome and the reasoning that led to it. Anonymity also makes it more difficult for scholars, historians, practitioners, political commentators, and—in the thirty-nine states with elected judges and justices—the electorate, to glean valuable information about legal decision-makers and the way they make their decisions. The value of determining authorship for unsigned opinions has long been recognized but, until now, the methods of doing so have been cumbersome, imprecise, and altogether unsatisfactory. Our work uses natural language processing to predict authorship of judicial opinions that are unsigned or whose attribution is disputed. Using a dataset of Supreme Court opinions with known authorship, we identify key words and phrases that can, to a high degree of accuracy, predict authorship. Thus, our method makes accessible an important class of cases heretofore inaccessible. For illustrative purposes, we explain our process as applied to the Obamacare decision, in which the authorship of a joint dissent was subject to significant popular speculation. We conclude with a chart predicting the author of every unsigned per curiam opinion during the Roberts Court.”
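
Authorship attribution of this kind is a standard text-classification task. A small sketch, assuming toy opinion snippets in place of the article's full corpus: word and phrase features from opinions of known authorship train a classifier that then labels an unsigned text.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for opinions of known authorship; the article used full SCOTUS opinions.
train_texts = [
    "the statute must be read in light of its plain text",
    "our precedents compel a narrow reading of the clause",
    "the balance of equities favors the petitioner here",
    "history and tradition inform the scope of the right",
]
train_authors = ["Justice A", "Justice A", "Justice B", "Justice B"]

# Word and bigram features feed a simple classifier, mimicking stylometric attribution.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_authors)
print(model.predict(["the plain text of the statute controls"]))  # likely "Justice A"
```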

NEWS ARTICLES

A 19-Year-Old Made a Free Robot Lawyer That Has Appealed $3 Million in Parking Tickets, Business Insider, Feb. 18, 2016
“But with the help of a robot made by British programmer Joshua Browder, 19, it costs nothing. Browder’s bot handles questions about parking-ticket appeals in the UK. Since launching in late 2015, it has successfully appealed $3 million worth of tickets.”

AI Pioneer ROSS Intelligence Lands Its First Big Law Clients, Am. Law., May 6, 2016
“In the latest sign that the use of artificial intelligence may eventually become common in Big Law, Baker & Hostetler has emerged as the first law firm to make public that it has licensed the artificial intelligence product developed by ROSS Intelligence for bankruptcy matters. Marketed as “the world’s first artificially intelligent attorney,” ROSS Intelligence uses International Business Machines’ Watson technology to allow users to ask natural language questions and get answers. The process not only constantly monitors the law, but more importantly uses “machine learning” capabilities to continuously improve its search results.”

Algorithm Writers Need a Code of Conduct, The Guardian, Dec. 6, 2014
“Without us noticing it, therefore, a new kind of power – algorithmic power – has arrived in our societies. And for most citizens, these algorithms are black boxes – their inner logic is opaque to us. But they have values and priorities embedded in them, and those values are likewise opaque to us: we cannot interrogate them.
This poses two questions. First of all, who has legal responsibility for the decisions made by algorithms? The company that runs the services that are enabled by them? Maybe – depending on how smart their lawyers are.
But what about the programmers who wrote the code? Don’t they also have some responsibilities? Pasquale reports that some micro-targeting algorithms (the programs that decide what is shown in your browser screen, such as advertising) categorise web users into categories which include “probably bipolar”, “daughter killed in car crash”, “rape victim”, and “gullible elderly”. A programmer wrote that code. Did he (for it was almost certainly a male) not have some ethical qualms about his handiwork?”

Algorithms of Our Lives, Chronicle of Higher Educ., Dec. 16, 2013
“Software has become a universal language, the interface to our imagination and the world. What electricity and the combustion engine were to the early 20th century, software is to the early 21st century. I [Lev Manovich] think of it as a layer that permeates contemporary societies. If we want to understand today’s techniques of communication, representation, simulation, analysis, decision making, memory, vision, writing, and interaction, we must understand software.
But while scholars and media and new-media theorists have covered all aspects of the IT revolution, creating fields like cyberculture studies, Internet studies, game studies, new-media theory, and the digital humanities, they have paid comparatively little attention to software, the engine that drives almost all they study.”

alt.legal: Can Computers Beat Humans at Law?, Above the Law, Mar. 23, 2016
“Will AI reach a point where it can beat a lawyer at the practice of law? Just as Lee Sedol battled AlphaGo, will legal robots of the future battle attorneys, seeking to supplant them? I offer an investigation into this for my second post in the series on AI and the law (part 1 is here).”

Anticipating Artificial Intelligence, Nature, Apr. 26, 2016
“As AI converges with progress in robotics, cloud computing and precision manufacturing, tipping points will arise at which significant technological changes are likely to occur very quickly. Crucially, advances in robot vision and hearing, combined with AI, are allowing robots to better perceive their environments. This could lead to an explosion of intelligent robot applications — including those in which robots will work closely with humans.”

Are the Robots About to Rise? Google’s New Director of Engineering Thinks So…, The Guardian, Feb. 22, 2014
“Google has bought almost every machine-learning and robotics company it can find, or at least, rates. It made headlines two months ago, when it bought Boston Dynamics, the firm that produces spectacular, terrifyingly life-like military robots, for an “undisclosed” but undoubtedly massive sum. It spent $3.2bn (£1.9bn) on smart thermostat maker Nest Labs. And this month, it bought the secretive and cutting-edge British artificial intelligence startup DeepMind for £242m.”

Automating Legal Advice: AI and Expert Systems, Prism Legal, Jan. 27, 2016
“In 2015, we regularly read about automating legal advice with artificial intelligence (AI), especially with IBM Watson. In my view, the AI smoke – at least as described in many reports – exceeds the fire. In fact, I [Ron Friedmann] cannot name a single large law firm that has deployed a Watson system. Yet, as I explain below, expert systems, a branch of AI, are actually being deployed. Several firms have licensed expert systems to automate advice or intake. I start with some background on AI before turning to recent expert system developments.”

Bots, Big Data, Blockchain, and AI – Disruption or Incremental Change?, Prism Legal, June 22, 2016
“The legal media has lately had a mania for tech headlines. Many commentators claim that tech, especially artificial intelligence (AI), will do something to Big Law. I [Ron Friedmann] disagree. Tech more likely will do something in it: incremental change. I start with the case against disruption, then look at four headline-grabbing technologies: AI, Bots, Big Data, and Blockchain.”

Cathedral of Computation, The Atlantic, Jan. 15, 2015
“Algorithms are everywhere, supposedly. We are living in an “algorithmic culture,” to use the author and communication scholar Ted Striphas’s name for it. Google’s search algorithms determine how we access information. Facebook’s News Feed algorithms determine how we socialize. Netflix’s and Amazon’s collaborative filtering algorithms choose products and media for us. You hear it everywhere. “Google announced a change to its algorithm,” a journalist reports. “We live in a world run by algorithms,” a TED talk exhorts. “Algorithms rule the world,” a news report threatens. Another upgrades rule to dominion: “The 10 Algorithms that Dominate Our World.””

Computer Model Uses Scotus Arguments to Predict Outcomes; Is It Smarter Than the Fourth Estate?, ABA J., Apr. 29, 2015
“A machine-learning computer model called CourtCast predicts U.S. Supreme Court decisions with a 70 percent accuracy rate, according to its creator, data science expert Chris Nasrallah. Is it a big deal? Maybe not, given that the petitioner has won 68 percent of the cases argued during the time John G. Roberts Jr. has been chief justice, the FiveThirtyEight blog reports.”

Computer vs. Lawyer? Many Firm Leaders Expect Computers to Win, Oct. 24, 2015
“In a large-scale survey released this month, 35 percent of law firm leaders said they could envision replacing first-year associates with law-focused computer intelligence within the next five to 10 years. That’s up from less than a quarter of respondents who gave the same answer in 2011.”

Conscription of Apple’s Software Engineers, The Atlantic, Feb. 18, 2016
“On Tuesday, a federal judge ordered Apple to write malware to load onto the dead terrorist’s phone, so that the FBI can keep guessing new codes electronically, forcing entry without causing the device to delete all the data that it contains.” [4]

Criminal Justice Data: California’s Attorney General Releases New Version of OpenJustice Dashboard, Lib. J., Feb. 18, 2016
“The OpenJustice v1.1 rollout includes new features focused on allowing Californians to better understand how the criminal justice system is working in their specific communities. Now at a city, county, and state level, the OpenJustice Dashboard shows crime, clearance, and arrest rates, as well as arrest-related deaths, deaths in custody, and law enforcement officers killed or assaulted. Because public safety is also impacted by many societal factors outside of law enforcement, the Dashboard incorporates important contextual data such as population and demographic information, unemployment rates, poverty rates, and educational attainment levels.”

Digital Smarts Everywhere: The Emergence of Ambient Intelligence, LLRX, May 21, 2016
“Upon reading a fascinating recent article on TechCrunch.com entitled The Next Stop on the Road to Revolution is Ambient Intelligence, by Gary Grossman, on May 7, 2016, you will find a compelling (but not too rocking) analysis about how the rapidly expanding universe of digital intelligent systems wired into our daily routines is becoming more ubiquitous, unavoidable and ambient each day. All around indeed. Just as romance can dramatically affect our actions and perspectives, studies now likewise indicate that the relentless global spread of smarter – and soon thereafter still smarter – technologies is comparably affecting people’s lives at many different levels. We have followed just a sampling of developments and trends in the related technologies of artificial intelligence, machine learning, expert systems and swarm intelligence in these 15 Subway Fold posts. I [Alan Rothman] believe this new article, adding “ambient intelligence” to the mix, provides a timely opportunity to bring these related domains closer together in terms of their common goals, implementations and benefits. I highly recommend reading Mr. Grossman’s piece in its entirety.”

End of Lawyers? Not So Fast., NY Times, Jan. 4, 2016, at B4
“The fate of lawyers has been seen as a harbinger of a broader wave of worker displacement. The rapid commercialization of a new generation of artificial intelligence-derived technologies has led to concerns that technological disruption will extend from white- and blue-collar occupations of largely routine work that can be automated, into highly paid professions like law and medicine.”

Ghost Ships, Economist, Mar. 8, 2014
“Ships, like aircraft and cars, are increasingly controlled by electronic systems, which makes automation easier. The bridges of some modern vessels are now more likely to contain computer screens and joysticks than engine telegraphs and a giant ship’s wheel. The latest supply ships serving the offshore oil and gas industry in the North Sea, for instance, use dynamic positioning systems which collect data from satellites, gyrocompasses, and wind and motion sensors to automatically hold their position when transferring cargo (also done by remote control) to and from platforms, even in the heaviest of swells.
However, as is also the case with pilotless aircraft and driverless cars, it is not so much a technological challenge that has to be overcome before autonomous ships can set sail, but regulatory and safety concerns. As in the air and on the road, robust control systems will be needed to conform to existing regulations.”

Google Glass Is Already Causing Legal Experts to See Problems, ABA J., Apr. 1, 2014
“[D]riving violations are only one example of the effects that Google Glass could have on the legal system. Like other new technologies, the device is poised to influence a wide swath of legal issues, including copyright infringement and privacy expectations.”

Here Come the Robot Lawyers, CNN Money, Mar. 28, 2014
“The law profession is being reshaped by new automation technologies that allow law firms to complete legal work in a fraction of the time and with far less manpower. Think IBM’s “Jeopardy!”-winning computer Watson — practicing law. “Watson the lawyer is coming,” said Ralph Losey, a legal technology expert at the law firm Jackson Lewis. “He won’t come up with the creative solutions, but when it comes to the regular games that lawyers play, he’ll kill them.””

How Artificial Intelligence Is Transforming the Legal Profession, ABA J., Apr. 1, 2016
“Artificial intelligence is changing the way lawyers think, the way they do business and the way they interact with clients. Artificial intelligence is more than legal technology. It is the next great hope that will revolutionize the legal profession. Change can be brought on through pushing existing ideas. What makes artificial intelligence stand out is the potential for a paradigm shift in how legal work is done. AI, sometimes referred to as cognitive computing, refers to computers learning how to complete tasks traditionally done by humans. The focus is on computers looking for patterns in data, carrying out tests to evaluate the data and finding results. Chicago-based NexLP, which stands for next generation language processing, is creating new ways for lawyers to look at data.”

Is Artificial Intelligence the Key to Unlocking Innovation in Your Law Firm?, Legal Week, Nov. 12, 2015
“The recent media frenzy about artificial intelligence (AI) has been unavoidable. This vision has perhaps come a step closer with the arrival of IBM Watson and Richard Susskind’s latest book, The Future of the Professions, which predicts an internet society with greater virtual interaction with professional services such as doctors, teachers, accountants, architects and lawyers. In reality, is AI many years away from making any real impact in the legal sector? And should law firms see this technical advancement as an opportunity or threat?”

Law Prof Ponders: If a Highly Advanced Robot Kills, Is It Murder or Product Liability?, ABA J. Podcast, Apr. 26, 2016
“We aren’t there yet. But a human-like robot portrayed in the short story “Mika Model” could eventually be developed. And, if so, the central question of the story also could come to life, says law professor Ryan Calo of the University of Washington. That is, how should the legal system treat a flesh-and-blood robot with a computer for a brain who kills her owner after claimed abuse, then pleads for a criminal defense lawyer? Should Mika be criminally charged? Put in a holding cell while prosecutors figure out what to do? Or is this a product liability issue?”

Laws of Adaptation, Harv. L. Today, Fall 2015
“Disruptive innovation, a term coined by Clayton Christensen, a professor at Harvard Business School, occurs when existing patterns of work and organization are radically transformed in a relatively short period of time, when new competitors arrive to offer low-cost alternatives at the bottom end of the market. The incumbents ignore these upstarts—until the disruptors become the norm and the old guard adapts or is replaced. Personal computers replacing mainframes, cellphones replacing landlines, retail medical clinics replacing traditional doctors’ offices, and Uber replacing taxis are important examples, Wilkins says. The legal market—which has maintained some of the highest profit margins for professional service businesses—faces the same challenge. Legal information is being digitized, and low-level tasks are being outsourced. Now the inspiration aspect of legal work—the solving of complex problems—could soon be facing competition from sophisticated computers. Meanwhile, consumers are turning eagerly to low-priced alternatives to traditional lawyering, such as online divorces and wills, and new online matchmaking services through which lawyers can compete for clients—like Uber, but for law.”

Legal Aid with a Digital Twist, NY Times, June 1, 2016
“The solution [for overmatched unrepresented litigants] is to establish a right to counsel in the civil cases where the most is at stake. Many state bar associations support a civil right to counsel, and 18 states are considering laws to guarantee a lawyer in certain civil cases. But until that happens — and we may wait a long time — it makes sense to take a harm-reduction approach and help the self-represented do the best they can. One way is with online forms and apps.”
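
Under the hood, such forms and apps are mostly document assembly: a guided interview collects answers, and software merges them into a court-ready filing. A minimal sketch of the idea, with an invented fee-waiver template and field names (production tools such as HotDocs, noted under Resources below, are far more capable):

```python
# A guided interview reduces to a mapping of questions to answers;
# the template text and field names here are invented for illustration.
from string import Template

petition = Template(
    "IN THE SUPERIOR COURT OF $county COUNTY\n"
    "Petitioner $name requests a waiver of filing fees because "
    "monthly household income is $$$income."
)

answers = {"county": "Alameda", "name": "J. Doe", "income": "1,450"}
print(petition.substitute(answers))  # merged, court-ready text
```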

Let Artificial Intelligence Evolve, Slate, Apr. 18, 2016
“Until an A.I. has feelings, it’s going to be unable to want to do anything at all, let alone act counter to humanity’s interests and fight off human resistance. Wanting is essential to any kind of independent action. And the minute an A.I. wants anything, it will live in a universe with rewards and punishments—including punishments from us for behaving badly. In order to survive in a world dominated by humans, a nascent A.I. will have to develop a humanlike moral sense that certain things are right and others are wrong. By the time it’s in a position to imagine tiling the Earth with solar panels, it’ll know that it would be morally wrong to do so.”

Machines v. Lawyers, City Journal, Spr. 2014
“The growing role of machine intelligence will create new competition in the legal profession and reduce the incomes of many lawyers. The job category that the Bureau of Labor Statistics calls “other legal services”—which includes the use of technology to help perform legal tasks—has already been surging, over 7 percent per year from 1999 to 2010. As a consequence, the law-school crisis will deepen, forcing some schools to close and others to reduce tuitions. While lawyers and law professors may mourn the loss of more lucrative professional opportunities, consumers of modest means will enjoy access to previously cost-prohibitive services.”

Meet Ross, the World’s First Robot Lawyer, Fortune, May 12, 2016
“Global law firm Baker & Hostetler, one of the nation’s largest, recently announced that it has hired a robot lawyer created by ROSS Intelligence, Futurism reports. Ross will be employed in the law firm’s bankruptcy practice, which currently employs close to 50 lawyers. Ross was built on IBM’s Watson. It can understand your questions, and respond with a hypothesis backed by references and citations. It improves on legal research by providing you with only the most highly relevant answers rather than thousands of results you would need to sift through. Additionally, it is constantly monitoring current litigation so that it can notify you about recent court decisions that may affect your case, and it will continue to learn from experience, gaining more knowledge and operating more quickly, the more you interact with it.”

More Nuanced View of Legal Automation, Concurring Opinions, June 27, 2014
“A Guardian writer has updated Farhad Manjoo’s classic report, “Will a Robot Steal Your Job?” Of course, lawyers are in the crosshairs. As Julius Stone noted in The Legal System and Lawyers’ Reasoning, scholars have addressed the automation of legal processes since at least the 1960s. Al Gore now says that a “new algorithm . . . makes it possible for one first year lawyer to do the same amount of legal research that used to require 500.” But when one actually reads the studies trumpeted by the prophets of disruption, a more nuanced perspective emerges.”

New Chips Are Using Deep Learning to Enhance Mobile, Camera and Auto Image Processing Capabilities, LLRX, June 6, 2015
“Alan Rothman takes a look at the expanding experience of how we interface with our devices’ screens for inputs and outputs nearly all day and every day. He explains how what many of these gadgets will soon be able to display, and moreover understand, about digital imagery is about to take a significant leap forward. This is a result of the pending arrival of new chips embedded into their circuitry that are enabled by artificial intelligence (AI) algorithms.”

New Law School Courses Take on Robots, Videogames and Piketty-Mania, Wall St. J., June 24, 2014
“Robots are coming to Georgetown University, which is offering its law students a seminar on the regulation of “autonomous agents.” And at Pepperdine University law school, videogames aren’t just a procrastinating student’s best friend but the subject of a new course. These are among the more unconventional course offerings making their debut this fall semester, as law schools continue to expand their curricula beyond the world of torts, contracts and criminal procedure.”

New Way to Look at Law, With Data Viz and Machine Learning, Wired, June 11, 2014
“As its creators see it, Ravel’s visual search offers myriad improvements over the old columns of text results. It better lets you see how cases evolved over time, and potentially lets you see outliers that could be useful in crafting an argument–cases that would languish at the bottom of a more traditional search. The visualization, Reed insists, “tells a lot more of the story of law than the rank ordered list.” (That might be true. When they first showed their visual search to a veteran judge, he looked at the complex map of circles and responded: “This is how my brain works!”)”

Revealed: Divorce Software Error Hits Thousands of Settlements, Guardian, Dec. 17, 2015
“Thousands of couples who have settled their divorces in the last 20 months may have to re-open negotiations because a critical fault has been found in software used to calculate financial terms.”

Robots Could Make the Supreme Court More Transparent, The Atlantic, Jan. 20, 2016
“Li and his colleagues—Pablo Azar, David Larochelle, Phil Hill, James Cox, Robert Berwick, and Andrew Lo—built an algorithm designed to determine which justice wrote unsigned opinions. (Or which justice’s clerks, as is often the case.) Their work began in 2012, amid rumors that John Roberts, the chief justice, had changed his mind at the last minute about the Affordable Care Act—a move that apparently meant he ended up writing most of the majority opinion after having already written much of the dissent. Li and his colleagues wanted to find out if that theory might be true.”
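
The Atlantic piece does not detail the method, but authorship attribution of this kind is classic stylometry: train a classifier on each justice's signed opinions, then score the unsigned opinion against those stylistic profiles. A toy sketch of the general technique, with placeholder snippets and names rather than the team's actual model or data:

```python
# Stylometric authorship attribution in miniature. Function words and
# habitual phrasings carry the signal, so plain token counts suffice here.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

signed_opinions = [
    "We hold, moreover, that the statute plainly requires notice.",
    "It is axiomatic that the Constitution does not permit this result.",
    "We hold that, on these facts, the statute requires nothing more.",
    "It is axiomatic, indeed fundamental, that due process applies.",
]
authors = ["Justice A", "Justice B", "Justice A", "Justice B"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(signed_opinions)
model = MultinomialNB().fit(X, authors)

unsigned = vectorizer.transform(["We hold that the statute plainly requires it."])
print(model.predict(unsigned))  # best guess at the unsigned opinion's author
```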

Seeing the Possibilities of Automation, Tim Hwang Is Working Toward the Death of Practice-As-Usual, ABA J., Sept. 9, 2014
“Tim Hwang went to law school and became a lawyer for one reason: To kill the legal profession as we know it. Fascinated by the possibilities of using technology and automation to create efficiencies that were not present in the legal market, Hwang figured the best way for him to test his theories would be to become a lawyer. “I saw myself as being undercover at law school,” says Hwang, who graduated from the University of California at Berkeley’s Boalt Hall in 2013. He then joined Davis Polk & Wardwell, but unlike many of his fellow first-year associates, he wasn’t looking to make partner. Indeed, he didn’t even make it to his yearly evaluation. All he wanted to do was test out some software he had developed to do much of his work for him.”

Stanford Engineers’ ‘Law, Order & Algorithms’ Data Project Aims to Identify Bias in the Criminal Justice System, Stanford News, Feb. 10, 2016
“To provide an unbiased, data-driven analysis of such issues, researchers at Stanford University’s School of Engineering have launched what they call the Project on Law, Order & Algorithms. The project is led by computational social scientist Sharad Goel, an assistant professor of management science and engineering. He also teaches a course at Stanford Engineering that explores the intersection of data science and public policy issues revolving around policing.”

UNC, MIT Study Probes AI Threats to Big Law, Am. Law., Jan. 5, 2016
“Recent advances in artificial intelligence have led many law firm leaders to conclude that computers will replace more and more junior lawyers over the coming decades, with employment gradually hollowing out from the bottom up. Such predictions may be off base, if a new academic paper, “Can Robots Be Lawyers?,” is correct. The draft paper, authored by Dana Remus, a professor currently on leave from the University of North Carolina School of Law, and Frank Levy, an urban economics professor emeritus at Massachusetts Institute of Technology, examines specific ways that automation might or might not be applied to a range of legal tasks. The study finds that while many tasks may be automated, most legal work is too complex—and too important—for even the most advanced machines to learn and replicate.”

Wachtell Way of E-Discovery, Am. Law., Feb. 1, 2016
“She [Maura Grossman] and [Gordon] Cormack developed a process they call continuous active learning, in which a computer uses machine learning techniques to get better at identifying the right documents. Simply put, machine learning uses computer algorithms to organize information by analyzing features in data. By showing the machine relevant documents, a person can train the machine to identify others that fit the pattern. Grossman and Cormack got three related patents on the process, they have other patents pending and they’ve applied for a trademark on the term “continuous active learning.””
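
The specific protocol Grossman and Cormack patented is theirs, but the general shape of such a relevance-feedback loop can be sketched: fit a model on everything reviewed so far, rank the unreviewed documents by predicted relevance, route the top-ranked one to the human reviewer, and fold the new judgment back into training. Everything below (corpus, labels, reviewer stand-in) is invented for illustration:

```python
# A minimal sketch of a continuous-learning review loop, not the
# patented process itself. Documents and labels are toy examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

documents = [
    "indemnification obligations survive termination of this agreement",
    "cafeteria lunch menu for the week of March 7",
    "limitation of liability and indemnification riders",
    "parking garage closed for maintenance on Saturday",
]
labels = {0: 1, 1: 0}  # index -> 1 (relevant) or 0 (not relevant)

def reviewer_label(i):
    """Stand-in for the human reviewer's relevance judgment."""
    return 1 if "indemnification" in documents[i] else 0

vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(documents)

while len(labels) < len(documents):
    train = sorted(labels)
    model = LogisticRegression().fit(X[train], [labels[i] for i in train])
    pool = [i for i in range(len(documents)) if i not in labels]
    # Rank the unreviewed pool by predicted relevance and surface the
    # top candidate for human review; its label feeds the next round.
    scores = model.predict_proba(X[pool])[:, 1]
    pick = pool[scores.argmax()]
    labels[pick] = reviewer_label(pick)
```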

Why Embracing Artificial Intelligence Is in Your Law Practice’s Best Interests (Podcast), ABA J., Mar. 28, 2016
“Artificial intelligence has long been a tool for lawyers to perform their tasks more efficiently. However, the technology has advanced to the point where computers can now perform many of the tasks that were once the exclusive domain of humans. In this month’s Asked and Answered, the ABA Journal’s Victor Li talks to freelance writer Julie Sobowale about how artificial intelligence is revolutionizing the practice of law.”

Will Technology Create a Lawyer ‘Jobs-Pocalypse’? Naysayers Overstate Impact, Study Says, ABA J., Jan. 5, 2016
“Automation is having an impact on the job market for lawyers, but the future isn’t as dire as some headlines predict, according to a new study. The researchers analyzed law-firm billing data for the year 2014 provided by an analytics company and came up with this “ballpark estimate”: Lawyer employment would drop slightly more than 13 percent if automation is applied to law practice.”

CONFERENCES AND PROCEEDINGS

Berkman Klein Center Events (Harvard University)
“Through discussions, lectures, conferences, and other gatherings, the Berkman Klein Center convenes diverse groups around a wide range of topics related to the Internet as a social and political space. The unique interactions generated through these events – both as process and product – are fundamental elements of the Berkman Klein Center’s modus operandi. We encourage you to join us at the events listed below to learn, engage, and connect with our community.”

CodeX FutureLaw Conference 2016 (Stanford University)
“On May 20, 2016, CodeX – the Stanford Center for Legal Informatics will host the CodeX FutureLaw 2016, CodeX’s fourth annual conference focusing on how technology is changing the landscape of the legal profession, the law itself, and how these changes impact us all. CodeX FutureLaw 2016 will bring together the academics, entrepreneurs, lawyers, investors, policy makers, and engineers spearheading the tech-driven transformation of our legal system.”

Computers Gone Wild: Impact and Implications of Developments in Artificial Intelligence on Society (Berkman Klein Center at Harvard University)
“The second “Computers Gone Wild: Impact and Implications of Developments in Artificial Intelligence on Society” workshop took place on February 19, 2016 at Harvard Law School. Marin Soljačić, Max Tegmark, Bruce Schneier, and Jonathan Zittrain convened this informal workshop to discuss recent advancements in artificial intelligence research. Participants represented a wide range of expertise and perspectives and discussed four main topics during the day-long event: the impact of artificial intelligence on labor and economics; algorithmic decision-making, particularly in law; autonomous weapons; and the risks of emergent human-level artificial intelligence. Each session opened with a brief overview of the existing literature related to the topic from a designated participant, followed by remarks from two or three provocateurs. The session leader then moderated a discussion with the larger group. At the conclusion of each session, participants agreed upon a list of research questions that require further investigation by the community. A summary of each discussion as well as the group’s recommendations for additional areas of study are included below.”

Human Enhancement and the Law: Regulating for the Future Conference (University of Oxford)
“This conference and resulting special edition of the Journal of Law, Information and Science, will aim to identify the legal issues that arise as a result of these developments in human enhancement technologies. Paper presentations and panel sessions will be directed at exploring the ways in which legal systems — both within jurisdictions and across borders — can and should respond to these issues. Our hope is that in bringing together scholars from a range of relevant disciplines — law, philosophy, politics, sociology and the sciences — the conference will facilitate the development of some answers to the thorny legal challenges enhancement technologies pose.”

Injecting Rationalism into the Artificial Intelligence Discussion, Bloomberg Law, Feb. 11, 2016
“Ever since IBM’s Watson beat two Jeopardy! champs in 2011, there have been voices predicting that artificial intelligence will displace human lawyers. To bring some light to this discussion, Vanderbilt Law School is hosting “Watson, Esq.: Will Your Next Lawyer Be a Machine,” billed as the first legal conference on the topic and scheduled to take place on April 13th and 14th.”

International Conference on Artificial Intelligence and Law (ICAIL)
“The ICAIL conference is the primary international conference addressing research in Artificial Intelligence and Law, and has been organized biennially since 1987 under the auspices of the International Association for Artificial Intelligence and Law (IAAIL). ICAIL provides a forum for the presentation and discussion of the latest research results and practical applications; it fosters interdisciplinary and international collaboration. The conference proceedings are published by ACM.”

International Legal Technology Association Conference (ILTACON)
“ILTACON is a four-day educational conference that draws on the experience and success of professionals employing ever-changing technology within law firms and legal departments. Technology and the business of law are rapidly changing, and we are all change agents. ILTACON is where we share and learn about what’s ahead and how to succeed in driving and embracing change within our teams, our organizations and the industry. All educational content is developed by a conference committee of 40+ peers.”

Is Harm to a Prosthetic Limb Property Damage or Personal Injury?, Motherboard, Jan. 26, 2016
“According to the law, you and your cell phone are two separate entities. No matter how reliant you might feel on the small, glowing rectangle in your pocket, the distinction is clear: you are a person and your phone is your property. In the same way, the law also sees a separation between a person who is using a prosthetic (such as a bionic limb) and the device itself. But as new types of prosthetics become available, and the integrations between man and machine become more intimate, the traditional distinctions the law makes are being questioned. This occurred most recently at the University of Oxford’s Human Enhancement and the Law: Regulating for the Future Conference, which explored the legal issues that might arise as a result of developments in human enhancement technologies.”

Legaltech (ALM)
“Legaltech is the Most Important Legal Technology Event of the Year! The show takes place twice a year – in New York City in February and in San Francisco in June. Featuring CLE accredited educational tracks, exhibits, and networking opportunities. Produced by ALM, the publisher of Legaltech News, The American Lawyer, Corporate Counsel, The New York Law Journal, The National Law Journal, The Recorder and more.”

Predicting the Supreme Court Using Artificial Intelligence, Concurring Opinions, Oct. 23, 2014
“Is it possible to predict the outcomes of legal cases – such as Supreme Court decisions – using Artificial Intelligence (AI)? I [Harry Surden] recently had the opportunity to consider this point at a talk that I gave entitled “Machine Learning Within Law” at Stanford. At that talk, I discussed a very interesting new paper entitled “Predicting the Behavior of the Supreme Court of the United States” by Prof. Dan Katz (Mich. State Law), Data Scientist Michael Bommarito, and Prof. Josh Blackman (South Texas Law). Katz, Bommarito, and Blackman used machine-learning AI techniques to build a computer model capable of predicting the outcomes of arbitrary Supreme Court cases with an accuracy of about 70% – a strong result. This post will discuss their approach and why it was an improvement over prior research in this area.”
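
Katz, Bommarito, and Blackman describe a tree-ensemble classifier built over case-level features from the Supreme Court Database. The sketch below is only a toy in that spirit, with invented integer-coded features, made-up outcomes, and a stand-in random forest rather than the paper's actual model or data:

```python
# A toy tree-ensemble outcome predictor; all features and labels invented.
from sklearn.ensemble import RandomForestClassifier

# Each row is one case: [issue_area, lower_court_disposition,
# petitioner_type], integer-coded; label 1 = reverse, 0 = affirm.
X = [[1, 0, 2], [3, 1, 0], [1, 1, 2], [2, 0, 1], [3, 0, 0], [2, 1, 1]]
y = [1, 0, 1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

new_case = [[1, 0, 2]]
print(model.predict(new_case))        # predicted disposition
print(model.predict_proba(new_case))  # the ensemble's confidence
```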

Thomson Reuters Unveils New Platforms for E-Discovery and Legal Research and Offers Glimpse of New AI Product Using IBM’s Watson, Law Sites, Jan. 28, 2016
“At what was billed as an Innovation Summit in its Times Square headquarters yesterday, Thomson Reuters Legal briefed journalists, bloggers and analysts on two products it is unveiling at Legaltech New York next week and also offered tantalizing hints of a product it is developing using the cognitive computing power of IBM’s Watson, the computer that once won Jeopardy!. The two products it announced were eDiscovery Point, which TR executives positioned as a market-changing platform that is faster and easier to use than other e-discovery products on the market, and Practice Point, a product that straddles two other TR products, Practical Law and Westlaw, and strives to deliver the content from both that is most relevant to a given task or legal issue. There are versions of Practice Point for both law firms and in-house counsel.”

Watson, Esq.: Will Your Next Lawyer Be a Machine? (Vanderbilt Law School Apr. 14, 2016)
“At this conference you will have the opportunity to engage with and learn from experts in the field of artificial intelligence, as well as notable law firm leaders and CIOs, corporate in-house counsel, legal ethicists, and technology providers from around the world.”

We Robot (University of Miami School of Law)
“Robotics seems increasingly likely to become a transformative technology. This conference will build on existing scholarship exploring the role of robotics to examine how the increasing sophistication of robots and their widespread deployment everywhere from the home, to hospitals, to public spaces, and even to the battlefield disrupts existing legal regimes or requires rethinking of various policy issues.” See also We Robot Bibliography.

Will Powerful Technology Replace Lawyers?, Bloomberg Law, May 1, 2015
“Lawyers could become legal concierges and should adopt computational law for their practices and stay current with technology, or else risk fading away, according to speakers at the CodeX FutureLaw Conference on April 30 at Stanford Law School. Oliver Goodenough, a University of Vermont law professor, told the audience at Stanford that there are things lawyers now do in their heads which can be done in a machine. “Law is fundamentally a computational exercise.””

RESOURCES

Association for the Advancement of Artificial Intelligence (AAAI)
“Founded in 1979, the Association for the Advancement of Artificial Intelligence (AAAI) (formerly the American Association for Artificial Intelligence) is a nonprofit scientific society devoted to advancing the scientific understanding of the mechanisms underlying thought and intelligent behavior and their embodiment in machines. AAAI aims to promote research in, and responsible use of, artificial intelligence. AAAI also aims to increase public understanding of artificial intelligence, improve the teaching and training of AI practitioners, and provide guidance for research planners and funders concerning the importance and potential of current AI developments and future directions.”

Berkman Klein Center (Harvard University)
“The Berkman Klein Center’s mission is to explore and understand cyberspace; to study its development, dynamics, norms, and standards; and to assess the need or lack thereof for laws and sanctions. We are a research center, premised on the observation that what we seek to learn is not already recorded. Our method is to build out into cyberspace, record data as we go, self-study, and share. Our mode is entrepreneurial nonprofit.” See Events.

Criminal Justice Data (Sunlight Foundation)
“As part of a new initiative, the Sunlight Foundation has amassed an inventory of publicly-available criminal justice data we’ve collected from all 50 states, the District of Columbia and the federal government. With that inventory, we created Hall of Justice, a resource for exploring the data and information we’ve identified.”

International Association for Artificial Intelligence and Law (IAAIL)
“IAAIL is a nonprofit association devoted to promoting research and development in the field of AI and Law, with members throughout the world. IAAIL organizes a biennial conference (ICAIL), which provides a forum for the presentation and discussion of the latest research results and practical applications and stimulates interdisciplinary and international collaboration.” See also Artificial Intelligence and Law (Springer); Featured Conferences.

Law, Order & Algorithms (Stanford University)
“Increasing transparency and accountability in law enforcement by compiling, analyzing and releasing a data set of more than 100 million highway patrol stops throughout the country.” See also Law, Order & Algorithms Course.
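
One standard analysis run on such stop data is the outcome test: compare how often each group is searched against how often those searches actually find contraband. A minimal sketch with hypothetical column names and a handful of invented records (the project's real schema and hundred-million-row scale differ):

```python
# Toy stop records; column names are assumed for illustration only.
import pandas as pd

stops = pd.DataFrame({
    "driver_race":      ["white", "white", "white", "black", "black", "black"],
    "searched":         [True, False, False, True, True, False],
    "contraband_found": [True, False, False, False, True, False],
})

grouped = stops.groupby("driver_race")
search_rate = grouped["searched"].mean()            # how often each group is searched
hit_rate = (grouped["contraband_found"].sum()
            / grouped["searched"].sum())            # contraband found per search
# A higher search rate paired with a lower hit rate is one classic
# signal that the search threshold differs across groups.
print(pd.DataFrame({"search_rate": search_rate, "hit_rate": hit_rate}))
```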

LawHelp Interactive (LSC)
“LawHelp Interactive is a project of Pro Bono Net, a nonprofit committed to increasing access to justice with technology, in cooperation with Ohio State Legal Services Association. The project is supported by the Legal Services Corporation, the State Justice Institute, and state courts in California, Montana, and New York. The HotDocs software has been donated by HotDocs Corporation. Thanks to our many collaborators, including Kaivo Software, Capstone Practice Systems, The Center for Access to Justice and Technology, and The Center for Computer-Assisted Legal Instruction.”

ROSS (ROSS Intelligence)
“ROSS is an artificially intelligent attorney to help you power through legal research. ROSS improves upon existing alternatives by actually understanding your questions in natural sentences like – “Can a bankrupt company still conduct business?” ROSS then provides you an instant answer with citations and suggests highly topical readings from a variety of content sources. ROSS is built upon Watson, IBM’s cognitive computer. Almost all of the legal information that you rely on is unstructured data—it is in the form of text, and not neatly situated in the rows and columns of a database. Watson is able to mine facts and conclusions from over a billion of these text documents a second. Meanwhile, existing solutions rely on search technologies that simply find keywords.”
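
The contrast drawn here, a single ranked answer versus thousands of keyword hits, can be illustrated with even a crude similarity ranking. The passages below are invented, and TF-IDF is a deliberately simple stand-in for Watson's far richer language pipeline:

```python
# Rank candidate passages against a natural-language question and
# return only the best match, rather than every raw keyword hit.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

passages = [
    "A debtor in possession may continue to operate the business.",
    "The trustee shall file a schedule of assets and liabilities.",
    "Postpetition transfers are avoidable except as authorized.",
]
question = "Can a bankrupt company still conduct business?"

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform(passages + [question])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

best = scores.argmax()
print(passages[best], f"(similarity {scores[best]:.2f})")
```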

Watson (IBM)
“IBM Watson is a technology platform that uses natural language processing and machine learning to reveal insights from large amounts of unstructured data.”

[1] “Mecha” is being used here in the sense in which it was popularized by the 2001 Steven Spielberg movie “A.I. Artificial Intelligence”. See A.I. Artificial Intelligence (IMDb 2001) (“In the not-so-far future the polar ice caps have melted and the resulting rise of the ocean waters has drowned all the coastal cities of the world. Withdrawn to the interior of the continents, the human race keeps advancing, reaching the point of creating realistic robots (called mechas) to serve them.”); A.I. Artificial Intelligence (Wikipedia) (“In the late 21st century, global warming has flooded the coastlines, wiping out coastal cities (such as Amsterdam, Venice, and New York City) and drastically reducing the human population. There is a new class of robots called Mecha, advanced humanoids capable of emulating thoughts and emotions.”).

[2] See, e.g., Sören Krach et al., Can Machines Think? Interaction and Perspective Taking with Robots Investigated via fMRI, 3(7) PLoS ONE e2597 (2008) (“With a mocked highly interactive game scenario confronting human participants with four interaction partners – a computer, a functionally designed robot, an anthropomorphic robot and a human confederate – we could demonstrate that participants increasingly engaged cortical regions corresponding to the classical Theory-of-Mind network the more the respective game partners exhibited human-like features.” Id. at 6).

[3] See generally Beverley Head, Why Cognitive Computing Is a Growth Engine for Businesses, Aus. Fin. Rev., July 8, 2016 (“Cognitive computing systems can help make sense of data embedded in more than 9 billion connected devices operating in the world today, which generate 2.5 quintillion bytes of new data daily.” And in practical terms cutting edge computers like Watson can digest mountains of information at breakneck speeds: “IBM’s Watson was the first commercially available cognitive computing platform. It is described by the company as able to analyse “high volumes of data and processes information more like a human than a computer”. Its prowess was demonstrated in 2011 when it leveraged its ability to read 30 million pages a second to win the US game show Jeopardy!”).

[4] Compare In re An Apple iPhone Seized During the Execution of a Search Warrant on a Black Lexus IS300, 2016 U.S. Dist. LEXIS 20543, at *1-2 (C.D. Cal. Feb. 16, 2016) (“For good cause shown, It Is Hereby Ordered that: 1. Apple shall assist in enabling the search of a cellular telephone, Apple make: iPhone 5C, . . . , on the Verizon Network, (the “Subject Device”) pursuant to a warrant of this Court by providing reasonable technical assistance to assist law enforcement agents in obtaining access to the data on the Subject Device.”) with In re Order Requiring Apple, Inc. to Assist in the Execution of a Search Warrant Issued by this Court, 149 F. Supp. 3d 341 (E.D.N.Y. 2016) (“For the reasons set forth below, I conclude that under the circumstances of this case, the government has failed to establish either that the AWA permits the relief it seeks or that, even if such an order is authorized, the discretionary factors I must consider weigh in favor of granting the motion.”).
