Digital Justice: Technology and the Internet of Disputes

Ethan Katsh and Orna Rabinovich-Einy

Print publication date: 2017

Print ISBN-13: 9780190464585

Published to Oxford Scholarship Online: April 2017

DOI: 10.1093/acprof:oso/9780190464585.001.0001


(p.39) Chapter 2: Access to Digital Justice

DOI: 10.1093/acprof:oso/9780190464585.003.0003

Abstract and Keywords

Chapter 2 offers a conceptual framework of access to digital justice through which the case studies in later chapters are analyzed. It opens with an overview of the origins of access to justice, the various barriers to justice, and the different approaches for addressing such barriers. The introduction of digital technology is impacting access to justice in multiple, sometimes contradictory, ways. On the one hand, it is creating many new disputes for which traditional dispute resolution mechanisms are often ineffective. On the other hand, it is facilitating the development of novel, accessible, and flexible online dispute resolution (ODR) and prevention (ODP) avenues. The chapter analyzes the conditions under which such novel processes can enhance access to justice and overcome the efficiency-fairness trade-off, a long-time characteristic of the traditional dispute resolution field.

Keywords:   access to justice, algorithms, confidentiality, fairness, efficiency, courts, technology, ODR, online dispute resolution, ODP, online dispute prevention

Ultimately the most basic values of society are revealed in its dispute-settlement procedures.

—Jerold S. Auerbach, Justice without Law?

“Before the Law stands a doorkeeper.” So begins Franz Kafka’s famous parable in The Trial about access to justice. It continues, “[t]‌o this doorkeeper comes a man from the country who asks to gain entry into the Law. But the doorkeeper says that he cannot grant him entry at the moment. The man thinks about it and then asks if he will be allowed to enter later. ‘It is possible,’ says the doorkeeper, ‘but not now …’ This is something that the man from the country had not expected. The Law, he thinks, should be accessible for everyone at all times.”1

There is reason today to wonder why, in an era of growing conflict and powerful information technologies, the resolution and prevention of disputes have not been a higher societal priority. In Kafka’s story, the doorkeeper remains unresponsive, and the man waits—for years, indeed for the rest of his life—hoping (unsuccessfully) to gain access. There have always been such doorkeepers and barriers standing in the way of simpler, easier, and cheaper forms of dispute resolution. Online dispute resolution (ODR) has the potential to remove, or at least lower, many of these barriers.

As Kafka’s parable continues, the man from the country is told that the doorkeeper he is speaking to is only the first of many. Others are waiting behind the doorkeeper, each posing new challenges of access to the Law. In our day, barriers to courts include architecture that is intimidating, costs that are significant, and knowledge systems that are complex. Physical distance that makes obtaining legal information inconvenient or burdensome is also a kind of doorkeeper. Administrative regulations, intricate court procedures, and the legal profession often serve as obstacles to access by keeping citizens at a distance from direct contact with the law. For some people in some places, technology itself is a doorkeeper that can interfere with efforts to assert rights and resolve disputes justly.

(p.40) Access to Justice in the Pre-Digital Era

Opening all these locked or partially open doors is something we must continue to strive to do. Today, new technologies are allowing us more and more opportunities to create a virtual “multi-door courthouse”—one much more accessible than the physical one proposed by early alternative dispute resolution enthusiasts.2 The original vision was of a courthouse that would lead different parties with varying kinds of disputes to the most appropriate dispute resolution process, each with its own set of characteristics, values, and goals. Digital technology and internet access allow for even more diverse options to be accessible from afar, anytime, anywhere. This new array of online dispute resolution spaces would encourage, rather than deny, entry.

The term “access to justice” is often equated with access to the courts and efforts for a less expensive, simpler, and faster legal process.3 The focus on the courts originated with the Access to Justice Movement of the 1960s.4 The movement shone a light on the barriers faced by low-income parties seeking to vindicate their rights in court. The reality, then and now, is that disputes rarely reach the courts due to the kinds of barriers mentioned above, such as the difficulty a single parent faces in finding the time, energy, expertise, and resources needed to litigate a court case; the challenges a disabled person faces in reaching the courthouse and conducting a trial; and the obstacles individuals face when litigating against big businesses whose sophistication and knowledge of the law are often superior.5

The need to overcome the many barriers that stand in the way of bringing one’s case to court has been illustrated in the dispute resolution literature through the image of a “pyramid.”6 The pyramid’s wide base signifies the large number of grievances that exist, and its narrow top represents the very small portion of cases that reach the court system.7 The pyramid highlights the limits of a court-centric view of access to justice—one that ignores the entire universe of disputes that never reach the formal justice system. It also underscores the reactive nature of dispute resolution mechanisms, which rely on the grievant not only being aware of the problem but also having the initiative and resources to seek redress. A person who is harassed may view such behavior as “part of life.” As a recent study noted, “when facing civil justice situations, people often do not consider law at all. They frequently do not think of these situations as legal, nor do they think of courts or of attorneys as appropriate providers of remedy.”8

Several decades ago, legal scholar Marc Galanter questioned the court-centric approach underlying our perception of “access to justice”:

Access to where? Where is the justice we want to admit people to? Where does it reside? Whose is it to dispense? It would be a distortion, (p.41) but perhaps a useful oversimplification, to conclude that the basic model of most inquiry into access to justice is, crudely, to get people and their grievances into court. This is too narrow: “court” has to be enlarged to include a variety of remedial agencies. And “getting in” has to be enlarged to include a variety of remedial agencies. And where agencies and complaints are mismatched, it extends to changing the character of the forum; and even to changing the character of the complainant, by providing means to recognize and aggregate diffuse claims. The access to justice project is permeated by an admirable willingness to challenge assumptions about institutional design. It has welcomed proposals to make the forum more suitable to the character of the dispute and the parties, and to make the disputants more capable of using the forum.9

The Access to Justice Movement focused efforts on lowering the costs of litigation for low-income disputants and leveling the playing field for those who could not previously reach the court.10 But this is still a court-centric view. By contrast, those advocating the use of mediation and arbitration were endeavoring to address the limitations of court-centrism more broadly. But even informal, flexible, and nonadversarial processes present barriers and challenges—such as the need to devote time, money, and effort—and require active pursuit by aggrieved parties. The goal of ODR, on the other hand, is to design new processes and institutions that are usable both in and out of court. This would not only lower traditional barriers but also anticipate issues and link dispute resolution with proactive problem-solving institutions.

The most obvious barriers to the ideal of access to courts are economic ones: the need to pay a filing fee and to hire a lawyer. In addition, there are costs related to the time and energy that parties have to devote to litigation, which include having to miss work, attend court sessions, meet with one’s lawyers, and strategize over the case. For low-income disputants, this often means that they cannot pursue high-stakes disputes if an attorney is not provided to them. But even individuals of higher income levels have often found that the costs of litigation would exceed its expected benefits where, for example, the amount in dispute is relatively low and the costs associated with litigation are high due to legal uncertainty or reputational stakes.11

There are also geographic, psychological, linguistic, and cultural barriers. Geographic barriers have to do with the unavailability of legal services in certain locations.12 People living in large cities have easier access to courts and to a wider array of legal services to choose from as compared to those living in rural and remote areas where there is limited access to courts and a shortage of lawyers. This state of affairs echoes, to some extent, gaps in socioeconomic status, but also extends beyond them, unequally impacting groups such as people with (p.42) disabilities.13 A centralized system has also meant that the courts within cities and larger towns are not community-based and cannot offer localized justice, which depends on familiarity with neighborhood problems, needs, and resources.14

Psychological barriers involve the nonfinancial costs associated with having to go through a lengthy, oftentimes intrusive, legal process.15 These barriers are subtle in nature, taking place in people’s minds, sometimes subconsciously, but they can have a profound effect in preventing not only the filing of a claim, but also the recognition that a party has suffered legal harm; that a particular person or entity is responsible for such harm; and that they are entitled to redress should they pursue their rights in court.16 An injury suffered by a child while playing with their toy could be attributed to carelessness by the child, to the negligence of the adult supervising the child, or to the faulty design of the toy. Each of these options relies on a different understanding of the relevant facts and rules and is associated with differing levels of sophistication in terms of familiarity with the law.

Finally, linguistic and cultural barriers make court procedures hard to decipher and therefore intimidating. For those speaking a foreign language, legalese is twice removed; even communication with their lawyer, if they can afford one, is challenging.17 The availability, quality, and costs of interpreters vary and have therefore not been effective in removing linguistic barriers.18 Linguistic barriers also often overlap with cultural ones, driven by expectations, assumptions, and customs that differ from those such parties bring with them (such as the meaning of silence by a witness on the stand). But even local disputants face cultural barriers when they arrive in court, where judges employ their own assumptions and expectations, shaping their interpretation of the law and its application in specific cases (for example, assumptions about how a “reasonable person” would act under a certain set of circumstances are shaped by our own experiences and vary greatly from one social group to another). For those who come from different social strata than judges do, these rulings and procedures may seem foreign, incomprehensible, or unjust.

The Three Waves of the Access to Justice Movement

The initial recognition in the post–World War II era that courts were largely inaccessible generated a range of reform efforts. Calls for enhancing access to justice that focused on financial barriers and on making legal aid lawyers available have been described as the “first wave” of the Access to Justice Movement.19 Typically, these reform proposals extended legal aid to the poor, granted relief from court fees, and lowered the costs of court processes by such means as creating flexible and informal small claims courts or liberalizing the delivery of legal services.20

(p.43) The “second wave,” which took place in the 1970s, took a broader view of the need for access, strengthening disadvantaged groups (as opposed to individuals) through public interest litigation and class actions.21 This broader outlook expanded the focus from the interests of the poor to more diffuse concerns—such as environmental issues and consumer grievances—which are often more critical to the middle class (credit card holders, homeowners, etc.).22

In the decades that followed, the original court-centric approach that stood at the heart of access to justice gave way to a more expansive approach, one that recognized the important role that simpler, more accessible procedures could play. These developments, constituting a “third wave,” led to the expansion of alternative dispute resolution (ADR) approaches and to various attempts to simplify court procedures, such as relaxing procedural rules and placing laypeople on the bench, as well as to the adoption of mediation as an alternative to court.23

This is where the ADR movement and the Access to Justice Movement began to converge. As the ADR movement matured, courts were no longer viewed as the sole or even principal site for obtaining justice, and a broader vision of “justice in many rooms,”24 or the “multi-door courthouse”25—where different processes aligned with different disputes—became popular. This new vision of justice prioritized meeting individual interests and needs over the protection of rights and the establishment of standards. The ADR movement critiqued litigation not only for the high costs and lengthy proceedings associated with it but also, as stated by ADR scholar Carrie Menkel-Meadow, for its adversarial nature and “limited remedial imagination,” which could destroy relationships, lead to suboptimal outcomes, and result in overall dissatisfaction.26

Consensual resolutions were now preferred over judicial decisions.27 Procedural justice theories advocated the adoption of processes that allowed for disputants to be heard, thereby enhancing perceptions of fairness by the parties.28 It was claimed that ADR processes (and mediation in particular) could deliver a different kind of process that allowed for direct party involvement and that focused on what parties felt and needed as opposed to what they were entitled to under law.29 It was expected that these processes would result in more satisfactory and creative outcomes toward the goal of preserving an ongoing relationship.30

In reality, however, mediation and arbitration were not always successful in realizing the hopes for swifter, cheaper, and less adversarial dispute resolution processes. While ADR sought to reduce access barriers, it could not eliminate them entirely given the ultimate need to rely on human capacity and to meet in a physical space. In addition, some of mediation’s qualitative advantages were lost over time, as ADR techniques began to be employed by judges and court cases were referred to mediation. Professor Nancy Welsh and others uncovered how, instead of providing parties with opportunities to participate directly in telling their story, mediation sessions became another arena in which lawyers and (p.44) legalese shaped the narrative and, ultimately, the outcomes.31 As the use of ADR became widespread in courts, these processes were, in a sense, co-opted.32 For the courts, however, even a thin version of ADR provided a welcome relief from the burden of their ever-growing caseload, even if it did not deliver the fuller promise that accompanied the adoption of such processes.

Our contemporary dispute resolution landscape now includes combinations of formalized dispute resolution out of court alongside informal judging in court.33 This development has received significant criticism for being the worst of all worlds: critics claim that it sacrifices both the unique nature of ADR and the commitment to public values and goals associated with formal litigation.34 In the most well-known criticism of the institutionalization of ADR, Owen Fiss stated that settlement (ADR included) should be treated “as a highly problematic technique for streamlining dockets … and although dockets are trimmed, justice may not be done.”35 In spite of such critiques, however, alternative dispute resolution spread widely, mainly due to the enhanced efficiency these processes offered. Indeed, today ADR is our primary form of dispute resolution.

As part of its widespread adoption, ADR has expanded in numerous ways, including into the prelitigation stage with the adoption of “conflict management systems.” These are internal units of companies and organizations that are responsible for resolving and preventing conflict, particularly workplace-related disputes.36 This extension of the reach of ADR in the 1990s was viewed by some as enhancing access to justice, allowing aggrieved employees and customers to air disputes without having to deal with the costs and difficulties associated with litigation.37 Others, taking a more legal-centric approach, viewed this development as yet another step in the curtailment of access to justice as the landscape of dispute resolution became increasingly privatized.38 As organizations instituted these internal ADR systems, the units would, over time, not only resolve disputes but also engage in what would later be termed “dispute prevention.” Until recently, however, prevention was mostly a peripheral and infrequent activity conducted in private dispute resolution settings,39 and it was far less effective without the kind of data collection and analysis that today serves as the foundation of technology-based dispute prevention activities.

As the use of information technologies and internet communication expanded from the mid-1990s onward, the numbers, characteristics, and scope of disputes changed, influencing the challenge of access to justice in different—even sometimes conflicting—directions. On the one hand, technology has exacerbated the problem of access to justice by generating a staggering number of disputes for which both courts and ADR are inadequate. It has also raised concerns about the fairness and quality of justice delivered through private online platforms shaped by algorithms and the use of Big Data. In addition, it has led to a system in which participation requires agreeing to contractual terms that are not understood or even noticed. Many of these terms very effectively block users’ access to justice.40

(p.45) On the other hand, the architecture of the internet facilitates the development of flexible, convenient, inexpensive, and speedy dispute resolution and prevention processes that do not require meeting face to face. These new systems can handle numbers of complaints previously unimaginable. There are also new opportunities for more quality control and monitoring than was possible in the past. The use of algorithms, together with the enhanced capacity, lower costs, and greater consistency associated with automated systems, lays the foundation for a new reality of increased access to justice.41

And yet we must remember that the introduction of algorithms and Big Data into the dispute resolution arena is hardly a one-way, positive-only development.42 As the use of private platforms spreads and the complexity and opaqueness of algorithms increase, new barriers and challenges to traditional beliefs about access to justice are being established.

Improving Access to Justice through ODR

Despite the growth of the ODR field and the development of promising new mechanisms, there are still enormous numbers of disputes for which there is no access to justice and no effective redress. This reality, unfortunately, has largely been overlooked. The assumption is perhaps that for significant conflicts, face-to-face mechanisms—such as courts and ADR processes—could provide effective redress. For small-scale disputes, the expectation might be that the market would take care of them—that private platforms would provide avenues for addressing problems as part of the necessary competition to attract and retain users. In reality, for many of these conflicts, courts, even small claims courts, and ADR are not feasible, and sometimes not even permitted. As will be discussed later, almost all large companies require users to accept agreements that mandate arbitration in the event of a dispute. The effect of requiring one particular form of dispute resolution has been to reduce the options available to consumers. For example, mandatory arbitration clauses typically prevent consumers from organizing a class action lawsuit.

Arbitration could, of course, be of value to consumers if it were actually used. Unfortunately, that rarely happens. A recent study by legal scholar Judith Resnik43 reported the following:

According to information from the American Arbitration Association (AAA), designated by AT&T to administer its arbitrations and complying with state reporting mandates, 134 individual claims (about 27 a year) were filed against AT&T between 2009 and 2014. During that time period, the estimated number of AT&T wireless customers rose from 85 million to 120 million people, and lawsuits filed by the federal (p.46) government charged the company with a range of legal breaches, including systematic overcharging for extra services and insufficient payments of refunds when customers complained.

More generally, the AAA, which is the largest non-profit provider of arbitration services in the United States, averages under 1,500 consumer arbitrations annually; its full docket includes 150,000 to 200,000 filings a year. Thus, were arbitration providers to be in high demand, their capacity to respond would be limited. An estimated 290 million people have cell phones, and “99.9% of subscribers” to the eight major wireless services are subject to arbitration clauses. For those with credit card debt, about 50% face arbitration, as do more than 30 million employees. Virtually all of these arbitration clauses bar class actions in courts or in arbitration, and to the extent that use of the court system is permitted, individuals are routed to small claims courts that also do not provide collective procedures.

Restricting consumers to one form of redress in online contracts is a subtle but highly effective form of “digital injustice.” (It should be noted that such clauses in consumer contracts are banned in the European Union.44) But a formal bar on particular dispute resolution avenues is often not what prevents disputants from turning to ODR; the simple lack of available channels for raising and resolving their problems online is enough. Throughout this book we uncover contexts in which ODR avenues are unavailable or selectively available, often at the sole discretion of privately run, for-profit platforms—some of which have grown larger than nation-states in user numbers, more powerful in terms of access to data on their users, and highly significant to users’ daily lives.

Resolution and prevention activities in the digital era can reshape our expectations about access to justice. In each of the particular dispute arenas and systems analyzed in this book, we will examine the new disputing environment and the ways in which it challenges access to justice, as well as the existing online dispute resolution and prevention efforts (or the lack thereof) and the ways in which they facilitate and/or encumber access to justice. In our discussion, we distinguish between “dispute resolution” and “dispute prevention” efforts to reflect the new setting in which prevention is no longer peripheral to, and reliant upon, dispute resolution activities but is becoming a central arena for addressing conflict.

Online Dispute Resolution

Expanding access to justice through ODR involves three major shifts in dispute resolution practices. These are the shift from a physical, face-to-face setting to a virtual one; the shift from human intervention and decision making (p.47) to software-supported processes; and the shift from an emphasis on the value of confidentiality to an emphasis on collecting, using, and reusing data in order to prevent disputes. From the perspective of the dispute resolution triangle, the first shift is largely one of greater convenience, the second, increased expertise, and the third a particular challenge in building trust. As the three shifts that come with the introduction of technology into dispute resolution shape a growing part of the dispute resolution landscape, the core of dispute resolution will gravitate from the act of resolution itself (the heart of dispute resolution conducted by human third parties in a face-to-face setting) to the pre-resolution stage of software design on the one hand and to the post-resolution stage of data analysis and dispute prevention efforts on the other hand.

The first shift has to do with the delivery of ODR services online—without having to meet in person or even communicate with one another synchronously. In the past, access to dispute resolution was inherently constrained by the need to meet in a physical location at a given time. The costs of orchestrating such an operation created significant barriers that prevented some disputants from pursuing their claims. The ease with which complaints can be made online and the convenience of communicating from one’s own computer or phone (whether as a party to the dispute or as a third party) have reduced costs dramatically and, consequently, lowered the bar for airing disputes.45 This is perhaps the most obvious manner in which ODR has impacted access to justice.

In face-to-face dispute resolution, human and organizational capacity limits the number of disputes that can be handled. Algorithms underlie the handling of very large numbers of disputes and, as a result, can provide access to justice in numbers never before possible.46 By shifting from human intervention to software, ODR is able to handle extremely large numbers of disputes with speedy and low-cost outcomes. The collection of data through ODR also provides the means for developing and refining algorithms that can identify patterns in the sources of disputes (for example, sellers’ ambiguous shipping policies) or in the effectiveness of various resolution strategies (for example, the stage at which dispute resolution is first offered), which can then be employed to prevent disputes and improve dispute resolution processes.
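
To make this kind of pattern analysis concrete, here is a minimal sketch that is not drawn from any platform discussed in the book: it assumes a hypothetical list of resolved-dispute records and simply counts which seller-and-reason combinations recur, the sort of aggregate signal that could feed prevention efforts.

```python
from collections import Counter

# Hypothetical resolved-dispute records; the field names are illustrative only.
disputes = [
    {"seller": "acme-goods", "reason": "shipping policy unclear", "outcome": "refund"},
    {"seller": "budget-toys", "reason": "item damaged", "outcome": "replacement"},
    {"seller": "acme-goods", "reason": "shipping policy unclear", "outcome": "refund"},
    {"seller": "acme-goods", "reason": "item arrived late", "outcome": "refund"},
]

def recurring_sources(records, min_count=2):
    """Count how often each (seller, reason) pair appears and return the
    pairs that recur at least `min_count` times, most frequent first."""
    counts = Counter((r["seller"], r["reason"]) for r in records)
    return [(pair, n) for pair, n in counts.most_common() if n >= min_count]

# Recurring pairs could be routed to prevention (for instance, asking the
# seller to clarify its shipping policy) instead of being resolved one by one.
for (seller, reason), n in recurring_sources(disputes):
    print(f"{seller}: '{reason}' appeared {n} times")
```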

The scope and capabilities of the technological Fourth Party (an ODR metaphor for technology used in dispute resolution) are currently in the midst of a highly significant transition: from applications that focus on communication and convenience to software that employs algorithms and exploits the intelligence of machines. This may, at times, remove the need for a mediator, customer service representative, or other dispute handler. This is what we refer to as the shift from human intervention to one assisted by software, and from a process that simply facilitates communication of information to one that processes it.

(p.48) An algorithm is simply a procedure or formula for making a decision.47 Algorithms are embedded in software and guide decision-making processes as users indicate choices and preferences in an interactive process. Algorithms can be useful when an issue or problem can be resolved by following a set of rules. Cybersettle’s blind bidding algorithm mentioned earlier is a simple example. Airbnb’s decisions on which rentals to display and in what order are determined by algorithms. Google’s algorithm for deciding the order in which to display search results is probably the most famous example of such machine-based decision making.
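
As an illustration of a decision made purely by following a set of rules, here is a minimal sketch of one blind-bidding round in the spirit of the Cybersettle example; the settlement condition (offers within a fixed percentage of each other) and the midpoint payout are illustrative assumptions rather than Cybersettle’s actual parameters.

```python
def blind_bidding_round(plaintiff_demand: float, defendant_offer: float,
                        spread: float = 0.30):
    """One round of double-blind bidding (illustrative rules only).

    Neither side sees the other's number. If the defendant's offer meets the
    plaintiff's demand, the case settles at the demand; if the two figures are
    within `spread` of each other, it settles at the midpoint; otherwise the
    round fails and returns None.
    """
    if defendant_offer >= plaintiff_demand:
        return plaintiff_demand
    if defendant_offer >= plaintiff_demand * (1 - spread):
        return (plaintiff_demand + defendant_offer) / 2
    return None

# A $10,000 demand against an $8,000 offer falls within the 30% spread, so the
# rule settles the claim at $9,000 with no human intervention.
print(blind_bidding_round(10_000, 8_000))  # 9000.0
print(blind_bidding_round(10_000, 5_000))  # None (no settlement this round)
```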

In the consumer context, if an Amazon user receives a broken toaster and files a complaint, there is an algorithm that decides how or whether to resolve the dispute without human intervention. An algorithm might apply different rules and produce different outcomes depending on numerous factors: whether the buyer is an Amazon Prime member, a frequent buyer, or an infrequent returner of goods; whether the item is inexpensive; or some combination of these. If all of the above can be answered “yes,” Amazon might simply tell the buyer that she need not return the toaster and that she can choose between getting a new toaster for free and having her money returned. A different outcome might come about if only one or two of the rules were met.

The appeal of algorithms to a company like Amazon is that an algorithm can do the work of many humans. Companies like Amazon could not exist at the scale they do without designing and relying on algorithms. As the company reported in an Annual Report to Shareholders, “many of the important decisions we make at Amazon.com can be made with data. There is a right answer or a wrong answer, a better answer or a worse answer, and math tells us which is which. These are our favorite kinds of decisions.”48

Clearly, algorithms can enhance access and efficiency dramatically. The question that follows is: what is their impact on justice and fairness? Algorithms have the potential to improve fairness in dispute resolution in several respects. For example, they hold a promise of enhanced consistency and limited discretion, as opposed to the relaxed environment in which many human “third-party” dispute resolvers operate.49 Human mediators have broad discretion as to the structure of the process, the role they carve out for the parties and their attorneys, whether they conduct private sessions with each of the parties, the degree of involvement they take in the substance of the disputes, and so forth. Such broad discretion has become less and less acceptable with the institutionalization of mediation in the court setting. As part of our public legal culture, we expect similar cases to be addressed similarly, and we associate similar procedures with similar outcomes.
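
Returning to the broken-toaster scenario above, here is a minimal sketch of what such a rule-based returns decision might look like; the rules, the price threshold, and the outcomes are hypothetical illustrations of the factors listed in the text, not Amazon’s actual policy or code.

```python
from dataclasses import dataclass

@dataclass
class Complaint:
    # Illustrative attributes mirroring the factors mentioned in the text.
    prime_member: bool
    frequent_buyer: bool
    infrequent_returner: bool
    item_price: float

def returns_decision(c: Complaint, price_cap: float = 50.0) -> str:
    """Decide a broken-item complaint without human intervention.

    The rules and the $50 price cap are assumptions for illustration: the
    more trust-related factors that hold, the more generous the outcome.
    """
    factors = [c.prime_member, c.frequent_buyer, c.infrequent_returner,
               c.item_price <= price_cap]
    if all(factors):
        return "keep the item; choose a free replacement or a full refund"
    if sum(factors) >= 2:
        return "return the item with a prepaid label; refund on receipt"
    return "escalate to a human customer-service agent"

print(returns_decision(Complaint(True, True, True, 29.99)))
```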

Appropriate design and language choices in ODR can also help reduce cognitive biases in both parties and human dispute resolvers, improving the ability to reach high-quality outcomes.50 One example of this would be a (p.49) feature in the Smartsettle software that allows parties who reach a settlement to optimize their resolution. Smartsettle requires that parties assign numerical values to each of their interests. Based on such values, parties choose a resolution that represents a combination of the various interests and is acceptable to each of them. The software examines the way in which the parties ranked their interests and analyzes whether at least one of the parties’ interests can be better met without making the other party worse off. If there is an alternative solution, the parties are presented with it; they can then either choose the proposed agreement offered by the software or remain with the resolution they originally negotiated.51 Software can also educate disempowered disputants about their options, enabling them to make informed choices.52 In the future, such software may become so effective as to be preferable to the assistance of a costly lawyer or a reluctant third party who may wish to remain detached and whose views may be shaped by unconscious biases that tend to favor powerful disputants.53
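
The post-settlement improvement step described above amounts to searching for a Pareto improvement over the negotiated package. Here is a minimal sketch under simplified assumptions, not Smartsettle’s actual method: each candidate package is scored for each party using the numerical values that party assigned to its interests, and a candidate is proposed only if it leaves neither party worse off and makes at least one party better off.

```python
def score(package: dict, weights: dict) -> float:
    """Weighted satisfaction of one party: `weights` holds the numerical
    values the party assigned to its interests, `package` how fully each
    interest is met (0.0 to 1.0)."""
    return sum(weights[i] * package.get(i, 0.0) for i in weights)

def pareto_improvement(negotiated, candidates, weights_a, weights_b):
    """Return a candidate that leaves neither party worse off and at least
    one party strictly better off than the negotiated package, if one exists."""
    base_a, base_b = score(negotiated, weights_a), score(negotiated, weights_b)
    for candidate in candidates:
        a, b = score(candidate, weights_a), score(candidate, weights_b)
        if a >= base_a and b >= base_b and (a > base_a or b > base_b):
            return candidate
    return None  # keep the resolution the parties originally negotiated

# Hypothetical two-interest example: speed of payment vs. confidentiality.
weights_a = {"fast_payment": 3, "confidentiality": 1}
weights_b = {"fast_payment": 1, "confidentiality": 3}
negotiated = {"fast_payment": 0.5, "confidentiality": 0.5}
candidates = [{"fast_payment": 0.8, "confidentiality": 0.5},
              {"fast_payment": 0.4, "confidentiality": 0.9}]
print(pareto_improvement(negotiated, candidates, weights_a, weights_b))
```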

Finally, the third shift associated with ODR, the move from processes that value confidentiality in resolving disputes to processes that are also focused on the collection and use of data, creates a new opportunity to redirect attention toward prevention. The documentation of data in digital form poses new risks to privacy and runs counter to the assumptions that have shaped face-to-face ADR, where the privacy of the proceedings has been considered a central feature and has resulted in minimal documentation and a lack of transparency in proceedings and their outcomes.54 However, data collection also allows for quality control over software design and human decision making in ways that are not always present or even possible in courts. Such monitoring can allow, for example, a study of the impact of the procedural design of the various elements of an ODR system on different types of disputants (those of low socioeconomic background, minorities, women, non-English speakers, etc.), so that we might find that some processes do not successfully offer justice to some segments of the population, and attempt to correct that.55 The data can also be presented through visual displays that may be easier to understand than text, such as video tutorials explaining the law in various fields or diagrams of the procedural options for claimants.

There is, however, a real concern that opaque algorithms with built-in biases will detract from the fairness of dispute resolution processes. This concern has to do with the accuracy of the algorithms; the possibility of errors in the data the algorithms employ; misguided reliance on correlations revealed by Big Data; and mistaken predictions underlying an algorithm’s design, relating to such matters as which passengers could be potential terrorists or which tax returns should be selected for auditing.56 Another concern relates to the impact automation may have on particular members of protected (or disempowered) groups. Some worry that algorithms may discriminate by using identity-related considerations, and that they may have a disparately (p.50) negative impact on members of some protected groups in a more indirect fashion by relying on skewed databases.57 For example, people of color may be overrepresented in some databases due to biases in the measurement and selection of data for such datasets, skewing further analyses based on that data.

There is also the troubling fact that “[c]lassic values … such as due process are not easily coded into software language,” resulting in “erroneous denials of benefits, lengthy delays and troubling outcomes.”58 Such difficulties may be reinforced by programmer biases and lack of relevant knowledge.59 These concerns are heightened by the fact that most of us cannot see what drives the operation of an algorithm or its precise decision rules. Entities operating the algorithms are, of course, reluctant to release that information publicly due to intellectual property concerns.60

All of these possibilities were highlighted in the Microsoft “bot” fiasco of March 2016, in which Tay, a “chatbot”61 the company designed to engage with users in light conversation, ended up posting offensive statements, among them that “feminism is cancer” and that the Holocaust was made up. Microsoft apologized publicly, stating it would revive Tay “only if its engineers could find a way to prevent Web users from influencing the chatbot in ways that undermine the company’s principles and values.”62 It is worth noting that Microsoft’s Chinese chatbot presented a completely different experience, perhaps due to the restrictions that exist on speech more generally in China.63

Faulty or vulnerable algorithms like these aren’t just offensive. As we discuss in more detail in Chapter 4, when medical devices that communicate wirelessly and continuously are implanted in someone’s body, imperfect programming of the underlying algorithm can actually kill.64 Even if the need for transparency about what data, values, and assumptions drive algorithms were addressed, concerns have been raised regarding the effectiveness65 of potential solutions, such as the use of audit trails for the algorithmic process66 and requirements to use open code.67 And what about learning algorithms, whose mode of operation changes over time in a manner that defies consistency and is often not discernible?68 Audit trails could prove helpful, but they may not alleviate all concerns and are not yet common.69 Some have advanced the need to meet due process requirements in the design and operation of software and in Big Data–related analysis.70

While algorithms are imperfect, there have always been problematic aspects of traditional modes of dispute resolution as well.71 In fact, we have always been willing to accept some of these problems based on the understanding that attempts to increase such processes’ fairness are inevitably costly in terms of efficiency. Both court-philes and ADR enthusiasts have viewed the trade-off between efficiency and fairness as inherent to dispute resolution.72 By efficiency, we mean (1) the reduced costs, time, and effort that come with simple, loose procedures, and (2) the Pareto-optimal outcomes that (p.51) can result from an interest-based negotiation conducted in a flexible and confidential environment. By fairness, we refer to (1) procedural principles and protections,73 and (2) efforts to ensure that procedures do not yield systematically biased outcomes for members of disadvantaged groups. Such efforts therefore depend on the availability of explicit procedural rules and transparency. Thus, the dilemma between the efficiency and satisfaction gained by loose and flexible ADR procedures (e.g., increased “access”) on the one hand, and the cost to consistency and fairness of forgoing the detailed procedures and due process protections associated with courts (e.g., increased “justice”) on the other hand, has colored efforts to enhance access to justice in both private and public face-to-face dispute resolution.

It may be that—in terms of access to justice—the most significant contribution of ODR has to do with overcoming the trade-off between efficiency and fairness. The combination of data collection, communication, and ODR software opens up the possibility of increasing both efficiency and fairness, which can be translated into an increase in both “access” and “justice.” Whether this potential is realized or not depends on the design of the software, the criteria for the evaluation of ODR processes, and the nature of dispute prevention activities. This is because the three shifts that come with the introduction of technology into dispute resolution move the core of dispute resolution away from the act of resolution itself (the heart of dispute resolution conducted by human third parties in a face-to-face setting) toward the software design stage on the one hand and toward data analysis and dispute prevention efforts on the other.

Online Dispute Prevention

The familiar understanding of dispute resolution processes is as reactive mechanisms called into action by an aggrieved party. Dispute prevention, however, relies on tracing patterns of disputes and addressing them. These activities could occur post-dispute resolution based on data gathered as part of the resolution effort, or they could take place even before the aggrieved party is aware of a problem, the problem’s scope, and who might be responsible. While dispute prevention might not increase access to justice in a direct sense, it could reduce occurrences of injustice and barriers to justice.

Some dispute prevention activities were pursued in the past, but the analysis of data about disputes was manual and limited. The identification of patterns was something dispute professionals could do only through long-term familiarity with the environment in which they operated, drawing on their personal experience in resolving conflicts in a particular setting and on institutional memory.74 In order to be effective in a face-to-face environment, dispute (p.52) prevention required that a dispute resolver be familiar with a wide pool of present and past disputes, as well as able to identify patterns of disputes that had a common source. That common source could then be addressed in an attempt to prevent similar problems from arising in the future.75

The phenomenon of Big Data multiplies the possibilities for dispute prevention, perhaps ad infinitum. The ability to search and cross-check various types of data can generate important insights into the sources of disputes for different groups of disputants across various settings, and the insights gleaned from the data can be used both for effective solutions and for the prevention of future mishaps. Problems can be uncovered almost instantaneously and addressed even before they are detected by users and well before resolution has taken place. The discussion of Wikipedia’s bots in Chapter 5, for example, provides a good demonstration of how software can detect abuse instantaneously (albeit not flawlessly), reducing instances of inaccurate editing on the site. And even in those cases where disputes are not prevented but are dealt with through ODR, the insights reaped from the resolution efforts are fed back into the prevention realm.
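
To give a concrete flavor of this kind of instantaneous, rule-based screening, here is a minimal sketch in the spirit of, though far simpler than, the anti-abuse bots discussed in Chapter 5; the patterns and thresholds are hypothetical.

```python
import re

# Hypothetical screening rules: edit patterns a prevention bot might flag
# before any user notices the problem.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(.)\1{9,}"),                          # a character repeated 10+ times
    re.compile(r"\b[A-Z]{12,}\b"),                     # very long all-caps shouting
    re.compile(r"buy now|click here", re.IGNORECASE),  # spam phrasing
]

def hold_for_review(old_text: str, new_text: str) -> bool:
    """Return True if a proposed edit should be held for review.

    Two illustrative rules: the edit blanks most of the page, or it
    introduces text matching one of the suspicious patterns above.
    """
    if len(new_text) < 0.2 * len(old_text):
        return True
    return any(p.search(new_text) for p in SUSPICIOUS_PATTERNS)

print(hold_for_review("A well-sourced paragraph about dispute resolution.", ""))
# True: the edit removes nearly all content, so it is held for review.
```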

Data documentation and the study of such data, which are at the heart of dispute prevention–related activities, also allow for quality control and monitoring of the degree to which such activities are conducted in a fair, unbiased, and evenhanded manner. The emphasis on prevention and the shift toward proactive dispute prevention in itself can dramatically change the “access” component in access to justice, in part by lifting the onus for obtaining justice from the individual to the entity that collects data on the disputes. This is moving ODR further and further away from ADR. These types of developments could transform the dispute pyramid, opening up the sides of the pyramid toward a rectangular shape in which a larger proportion of disputes are addressed through dispute resolution and prevention activities.76 Such developments may impact justice, not only efficiency, because the ability to recognize that an injury meriting compensation has occurred (in other words, the ability to “name, blame, and claim”) often correlates with an aggrieved party’s socioeconomic status.77 By overcoming the need to rely on the aggrieved party’s ability to recognize and pursue a remedy, a larger portion of society’s problems can be addressed and prevented regardless of the aggrieved party’s awareness of his or her injury. In the social network context, content moderation and prescreening of text, pictures, and videos may be able to prevent the uploading of hurtful content, thereby preventing harm that may be difficult to undo after the fact.

But who are the entities that collect this data, and what drives their prevention agenda? How do we ensure that such efforts not only enhance “access” but also “justice”? It is important to know who decides what types of problems to prevent and what criteria such decisions are based on in order to ensure that private entities do not, for example, prevent problems related to sellers more than those related (p.53) to buyers. Complaints have been raised, for example, that social media platforms have limited free speech under the guise of content moderation and prevention efforts, with some voices hinting at commercial and political interests playing a role in such decisions.78 Where algorithms are employed in online dispute prevention activities, some of the concerns that have been raised regarding predictive algorithms, such as opacity and lack of consistency, also become relevant.

The same qualities of prevention-related activities that make quality control efforts possible also raise serious concerns about data protection and the privacy of users. This is particularly worrisome where data is used for the benefit of the company at the expense of its users. One such example is the “Facebook experiment” (and the lack of transparency that surrounded it),79 in which Facebook sought to uncover the impact of positive versus negative feed content on users’ moods.80 Facebook received harsh criticism for conducting the experiment without the prior consent of users. In another example, Facebook conducted a different experiment to examine whether notifications could encourage people to vote—raising concerns over “digital gerrymandering.”81 These examples underscore the scope of power exercised by megaplatforms with access to a huge amount of data on many millions of users. This power can be used (and abused) for a wide range of purposes, many of which we cannot even yet imagine.82

Since dispute prevention activities are even more opaque than dispute resolution ones, incentives for engaging in such activities in a rigorous and fair manner are lacking. This is even more pronounced given the private, for-profit nature of platforms that engage in these activities. Microsoft apologized for their chatbot experiment with Tay, presenting the mishap as a learning experience, as it “cannot fully predict all possible human interactive misuses without learning from mistakes.”83 Others, however, have criticized Tay’s user-instigated sexist and racist slurs as the outcome of Microsoft’s poor design and deficient monitoring of its bot.84 While companies like Microsoft worry about security breaches and other potential hazards, these companies currently seem to attach less significance to the prevention of trolling. It is precisely such choices and the incentives for making them that require close scrutiny.85 A principal challenge of the new and expanding area of online dispute prevention will therefore be the development of appropriate guidelines and monitoring practices for prevention-related activities.

Technology and the use of algorithms—as compared to human intervention—may either exacerbate these problems or ameliorate them. Such considerations need to inform decisions about the design of dispute systems, which should take into account many factors, including:

  • the scope of problems to be addressed,

  • the decision whether to implement a fully or partially automated system86 and the degree of human involvement, (p.54)

  • the voice given to various stakeholders in defining problems and means for addressing them,

  • the manner in which power differentials and conflicts of interest are dealt with,

  • the documentation of prevention-related activities, and

  • the degree of transparency offered regarding such actions.

We can expect that platforms will realize that they need to provide fair and efficient channels for raising, resolving, and preventing disputes if they are to gain the trust of their users and survive in the online environment.87 However, simply offering ODR may prove insufficient. ODR mechanisms that do not take into consideration aspects such as those mentioned above—that only provide selective redress for problems, limited opportunities for voice, or unequal opportunities for involvement in the design of the software underlying dispute resolution systems—are unfair and untrustworthy. In other words, the manner in which dispute resolution and prevention are designed and performed will shape the degree of both “access” and “justice” enjoyed by users. What’s more, the intervention of public authorities in realizing digital justice will be needed to help demonstrate the fairness of the process. As Schmitz and Rule wrote, “To have uninvolved third parties examine the detailed operations of an ODR system and then vouch for the fairness of that system can be enormously helpful in maintaining user trust.”88

In each of the settings examined in Part II of this book—e-commerce, healthcare, social networks, employment, and courts—we will analyze the prevalence, origins, and nature of disputes, the availability and workings of online dispute resolution, and the existence and effectiveness of dispute prevention efforts. Inasmuch as dispute resolution and prevention activities are rigorous, balanced, and effective, they can truly enhance access to justice. Unfortunately, many ODR schemes currently fall short of these expectations or are unavailable altogether for certain wrongs, generating instead access to injustice. Understanding how and why they do so helps us determine what the landscape for dispute resolution and prevention should look like in the future.

Notes:

(1.) Franz Kafka, THE TRIAL 267 (E. M. Butler ed., Willa Muir & Edwin Muir trans., Alfred A. Knopf, Inc. 1956).

(2.) Frank E. A. Sander, Varieties of Dispute Processing, in THE POUND CONFERENCE: PERSPECTIVES ON JUSTICE IN THE FUTURE (A. Levin & R. Wheelers eds., 1979). The vision of a “multidoor courthouse” was presented by Professor Sander at the Pound Conference as part of an effort to reduce courts’ caseload and improve access to courts.

(3.) Deborah L. Rhode, ACCESS TO JUSTICE (2004) [hereinafter RHODE, ACCESS TO JUSTICE]; Benjamin P. Cooper, Access to Justice without Lawyers, 47 AKRON L. REV. 205, 207 (2014); Marc Galanter, Access to Justice in a World of Expanding Social Capability, 37 FORDHAM URB. L.J. 115, 115 & n.1, 122 (2010); William C. Vickrey et al., Access to Justice: A Broader (p.188) Perspective, 42 LOY. L.A. L. REV. 1147, 1154 (2009). For a broader perspective, see Rebecca L. Sandefur, The Fulcrum Point of Equal Access to Justice: Legal and Non-Legal Institutions of Remedy, 42 LOY. L.A. L. REV. 949 (2009); William Davis & Helga Turku, Access to Justice and Alternative Dispute Resolution, 2011 J. DISP. RESOL. 47 (2011); Lawrence M. Friedman, Access to Justice: Some Historical Comments, 37 FORDHAM URB. L.J. 3 (2010); Steven H. Hobbs, Shout from Taller Rooftops: A Response to Deborah L. Rhode’s Access to Justice, 73 FORDHAM L. REV. 935 (2004).

(4.) On the changes in the legal and political environment that allowed such a movement to emerge in the United States, see RHODE, ACCESS TO JUSTICE, supra note 3, at 62–69.

(5.) Deborah L. Rhode, Symposium: The Constitution of and the Good Society: Access to Justice, 69 FORDHAM L. REV. 1785, 1785 (2001) (stating, “[m]‌illions of Americans lack any access to the system, let alone equal access”).

(6.) Richard E. Miller & Austin Sarat, Grievances, Claims, and Disputes: Assessing the Adversary Culture, 15 LAW & SOC’Y REV. 52 (1980–81). While the dispute resolution pyramid has been widely used, there have also been alternative conceptions of the evolution of disputes, as evidenced in the dispute resolution tree. See Catherine R. Albiston, Lauren B. Edelman & Joy Milligan, The Dispute Tree and the Legal Forest, 10 ANN. REV. L. & SOC. SCI. 105 (2014).

(7.) Miller & Sarat, supra note 6, at 61 (stating, “[t]‌he overall picture is of a remedy system that minimizes formal conflict but uses the courts when necessary in those relatively rare cases in which conflict is unavoidable”).

(8.) Rebecca L. Sandefur, Accessing Justice in the Contemporary USA: Findings from the Community Needs and Services Study, AM. B. FOUND. 16 (2014), http://www.americanbarfoundation.org/uploads/cms/documents/sandefur_accessing_justice_in_the_contemporary_usa._aug._2014.pdf.

(9.) Marc Galanter, Justice in Many Rooms: Courts, Private Ordering, and Indigenous Law, 13 J. LEGAL PLURALISM & UNOFFICIAL L. 1 (1981) [hereinafter Galanter, Justice in Many Rooms].

(10.) Marc Galanter, Why the “Haves” Come Out Ahead: Speculations on the Limits of Legal Change, 9 LAW & SOC’Y REV. 95 (1974).

(11.) 3 Mauro Cappelletti & Bryant G. Garth, ACCESS TO JUSTICE: EMERGING ISSUES AND PERSPECTIVES 9–10 (Mauro Cappelletti & Bryant G. Garth eds., 1978) [hereinafter 3 CAPPELLETTI & GARTH]. It is interesting to note that this has not always been the case with courts: “In colonial America, local courts were ‘on the whole, cheap, informal and accessible.’ Today they are, on the whole, expensive, highly formalized, and effectively unavailable to all but wealthy individuals and businesses.” See Gillian K. Hadfield, Innovating to Improve Access: Changing the Way Courts Regulate Legal Markets, 143 DAEDALUS 83, 84 (2014).

(12.) Mark Blacksell, Social Justice and Access to Legal Services: A Geographical Perspective, 21 GEOFORUM 489 (1990).

(13.) David A. Larson, Access to Justice for Persons with Disabilities: An Emerging Strategy, 3 LAWS 220 (2014).

(14.) 3 CAPPELLETTI & GARTH, supra note 11, at 10.

(15.) Id.

(16.) William L. F. Felstiner et al., The Emergence and Transformation of Disputes: Naming, Blaming, Claiming … , 15 LAW & SOC’Y REV. 631 (1980).

(17.) See Konstantina Vagenas, Table of Contents, A National Call to Action: Access to Justice for Limited English Proficient Litigants: Creating Solutions to Language Barriers in State Courts, NAT’L CTR. ST. CTS. (2013), http://www.ncsc.org/services-and-experts/areas-of-expertise/language-access/~/media/files/pdf/services%20and%20experts/areas%20of%20expertise/language%20access/call-to-action.ashx.

(18.) Charles M. Grabau & Llewellyn Joseph Gibbons, Protecting the Rights of Linguistic Minorities: Challenges to Court Interpretation, 30 NEW ENG. L. REV. 227, 255–60 (1996).

(19.) 1 Mauro Cappelletti & Bryant Garth, ACCESS TO JUSTICE: A WORLD SURVEY (Mauro Cappelletti ed., 1978) [hereinafter 1 CAPPELLETTI & GARTH].

(20.) 3 CAPPELLETTI & GARTH, supra note 11, at 401–03.

(21.) 1 CAPPELLETTI & GARTH, supra note 19, at 21.

(22.) 3 CAPPELLETTI & GARTH, supra note 11, at 173.

(23.) Mauro Cappelletti & Bryant Garth, Access to Justice: The Newest Wave in the Worldwide Movement to Make Rights Effective, 27 BUFF. L. REV. 181, 225 (1978).

(25.) Sandefur, supra note 3.

(26.) For the distinction between “quantitative” and “qualitative” advantages, see Carrie Menkel-Meadow, Pursuing Settlement in an Adversary Culture: A Tale of Innovation Co-Opted or “The Law of ADR,” 19 FLA. ST. U. L. REV. 1, 6 (1991) [hereinafter Menkel-Meadow, Pursuing Settlement]. For a critique of courts’ “limited remedial imagination,” see id. at 3.

(27.) Carrie Menkel-Meadow, When Litigation Is Not the Only Way: Consensus Building and Mediation as Public Interest Lawyering, 10 WASH. U. J. L. & POL’Y 37, 42 (2002).

(28.) Nancy Welsh et al., The Application of Procedural Justice Research to Judicial Actions and Techniques in Settlement Sessions, in THE MULTI-TASKING JUDGE: COMPARATIVE JUDICIAL DISPUTE RESOLUTION (Tania Sourdin & Archie Zariski eds., 2013); Tom R. Tyler, PSYCHOLOGY AND THE DESIGN OF LEGAL INSTITUTIONS (2007). On procedural justice in mediation, see Nancy A. Welsh, Making Deals in Court-Connected Mediation: What’s Justice Got to Do with It?, 79 WASH. U. L. Q. 787, 817 (2001).

(29.) Carrie Menkel-Meadow, When Dispute Resolution Begets Disputes of Its Own: Conflicts Among Dispute Professionals, 44 UCLA L. REV. 1871, 1872–73 (1997) [hereinafter Menkel-Meadow, When Dispute Resolution Begets Disputes of Its Own]; RHODE, ACCESS TO JUSTICE, supra note 3, at 87.

(30.) For literature providing various justifications for the formation of an alternative dispute resolution system, see David B. Lipsky ET AL., EMERGING SYSTEMS FOR MANAGING WORKPLACE CONFLICT: LESSONS FROM AMERICAN CORPORATIONS FOR MANAGERS AND DISPUTE RESOLUTION PROFESSIONALS 76–78 (2003) [hereinafter LIPSKY, EMERGING SYSTEMS]; Carrie Menkel-Meadow ET AL., DISPUTE RESOLUTION: BEYOND THE ADVERSARIAL MODEL 6–13 (2d ed. 2010) [hereinafter MENKEL-MEADOW, DISPUTE RESOLUTION]; Robert H. Mnookin ET AL., BEYOND WINNING: NEGOTIATING TO CREATE VALUE IN DEALS AND DISPUTES 100–01 (2000); Deborah R. Hensler, Our Courts, Ourselves: How the Alternative Dispute Resolution Movement Is Re-Shaping Our Legal System, 108 PENN. ST. L. REV. 165, 171 (2003); Menkel-Meadow, When Dispute Resolution Begets Disputes of Its Own, supra note 29, at 1872–75; Jacqueline M. Nolan-Haley, Court Mediation and the Search for Justice through Law, 74 WASH. U. L. Q. 47, 54–55 (1996).

(31.) Nancy A. Welsh, Look Before You Leap and Keep on Looking: Lessons from the Institutionalization of Court-Connected Mediation, 5 NEV. L.J. 399, 407–08 (2004); Nancy A. Welsh, The Current Transitional State of Court-Connected ADR, 95 MARQ. L. REV. 873, 874 (2012); MENKEL-MEADOW, DISPUTE RESOLUTION, supra note 30, at 406–09; Menkel-Meadow, Pursuing Settlement, supra note 26, at 6; Jacqueline Nolan-Haley, Mediation as the “New Arbitration,” 17 HARV. NEGOT. L. REV. 61, 73–89 (2012).

(32.) Carrie Menkel-Meadow, Regulation of Dispute Resolution in the United States of America: From the Formal to the Informal to the “Semi-Formal,” in REGULATING DISPUTE RESOLUTION: ADR AND ACCESS TO JUSTICE AT THE CROSSROADS 419 (Felix Steffek et al. eds., 2013).

(33.) Tania Sourdin & Archie Zariski, THE MULTI-TASKING JUDGE: COMPARATIVE JUDICIAL DISPUTE RESOLUTION (2013).

(35.) Owen M. Fiss, Against Settlement, 93 YALE L.J. 1073, 1075 (1984).

(36.) LIPSKY, EMERGING SYSTEMS, supra note 30 (drawing on a wide-scale empirical study, conducted by the authors, of Fortune 1000 companies’ corporate conflict strategies, which analyzed, among other things, the proliferation of internal dispute resolution systems, the sources of such growth, and future developments).

(37.) RHODE, ACCESS TO JUSTICE, supra note 3, at 87.

(38.) Lauren B. Edelman et al., Internal Dispute Resolution: The Transformation of Civil Rights in the Workplace, 27 LAW & SOC’Y REV. 497 (1993).

(39.) Only in recent decades have certain problem-solving courts engaged in dispute prevention activities in an attempt to reduce recidivism and address the “revolving door” phenomenon. See Greg Berman & John Feinblatt, Problem-Solving Courts: A Brief Primer, 23 LAW & POL’Y 125, 126 (2001); Bruce J. Winick, Therapeutic Jurisprudence and Problem-Solving Courts, 30 FORDHAM URB. L.J. 1055, 1056 (2003).

(40.) Anjanette H. Raymond, Yeah, But Did You See the Gorilla? Creating and Protecting an Informed Consumer in Cross-Border Online Dispute Resolution, 19 HARV. NEGOT. L. REV. 129 (2014).

(41.) Richard Susskind, TOMORROW’S LAWYERS: AN INTRODUCTION TO YOUR FUTURE 85–86 (2013) (conveying a broad understanding of access to justice in the digital age, one which includes not only dispute resolution but also dispute containment, avoidance, and “legal health promotion”).

(42.) Frank Pasquale & Glyn Cashwell, Four Futures of Legal Automation, 63 UCLA L. REV. DISCOURSE 26, 39 (2015); Maayan Perel & Niva Elkin-Koren, Accountability in Algorithmic Copyright Enforcement, 19 STANFORD TECH. L. REV. (forthcoming, 2016) (on the challenges presented by algorithms to transparency); Tal Zarsky, The Trouble with Algorithmic Decisions: An Analytic Road Map to Examine Efficiency and Fairness in Automated and Opaque Decision Making, 41 SCI., TECH. & HUM. VALUES 118, 122–23 (2015) [hereinafter Zarsky, The Trouble with Algorithmic Decisions]; Tal Z. Zarsky, Automated Prediction: Perception, Law, and Policy, 55 COMM. OF THE ACM, Sept. 2012, at 33, 35 [hereinafter Zarsky, Automated Prediction].

(43.) Judith Resnik, Diffusing Disputes: The Public in the Private of Arbitration, the Private in Courts, and the Erasure of Rights, 124 YALE L.J. 2804 (2015).

(44.) Pablo Cortes, ONLINE DISPUTE RESOLUTION FOR CONSUMERS IN THE EUROPEAN UNION 107–09 (2010); Julia Hornle, Legal Controls on the Use of Arbitration Clause in B2C E-Commerce Contracts, 2 MASARYK U. J. L. & TECH. 23, 28–33 (2008).

(45.) Colin Rule, ONLINE DISPUTE RESOLUTION FOR BUSINESS: B2B, E-COMMERCE, CONSUMER, EMPLOYMENT, INSURANCE, AND OTHER COMMERCIAL CONFLICTS 61–71, 77 (2002).

(46.) Richard E. Susskind & Daniel Susskind, THE FUTURE OF THE PROFESSIONS: HOW TECHNOLOGY WILL TRANSFORM THE WORK OF HUMAN EXPERTS 101–02 (2015).

(47.) For other definitions, see Danielle Keats Citron, Technological Due Process, 85 WASH. U. L. REV. 1249, 1257 n.45 (2008) (referencing a definition of an algorithm as a “mechanical or recursive computational procedure”); Michael L. Rich, Machine Learning, Automated Suspicion Algorithms, and the Fourth Amendment, 164 U. PA. L. REV. 871, 876 (2016) (referencing a definition of algorithms as “sequences of instructions to convert some input into an output”).

(49.) See SUSSKIND, supra note 41, at 89 (stating that with software, the “rules are embedded in the system. Failure to comply is not an option.”); Anupam Chander, The Racist Algorithm?, 115 MICH. L. REV. (forthcoming, 2017); Citron, supra note 47, at 1253; Orna Rabinovich-Einy, Technology’s Impact: The Quest for a New Paradigm for Accountability in Mediation, 11 HARV. NEGOT. L. REV. 253, 264 (2006) [hereinafter Rabinovich-Einy, Technology’s Impact].

(50.) Rabinovich-Einy, Technology’s Impact, supra note 49, at 276–78; Anjanette H. Raymond & Scott J. Shackelford, Technology, Ethics and Access to Justice: Should an Algorithm Be Deciding Your Case?, 35 MICH. J. INT’L L. 485, 522 (2014) [hereinafter Raymond & Shackelford, Technology, Ethics and Access to Justice]; Zarsky, Automated Prediction, supra note 42, at 35.

(51.) RULE, supra note 45.

(52.) See the discussion in Chapter 7 on courts, describing how software programs like A2J and court-based ODR sites can help translate legal rules and options into plain-English advice tailored to disputants and potential disputants.

(53.) Trina Grillo, The Mediation Alternative: Process Dangers for Women, 100 YALE L.J. 1545, 1567–69 (1991); Zarsky, Automated Prediction, supra note 42, at 35.

(54.) Rabinovich-Einy, Technology’s Impact, supra note 49, at 263–64.

(55.) Id. at 266, 273, 289–90.

(56.) Citron, supra note 47, at 1249; Zarsky, The Trouble with Algorithmic Decisions, supra note 42, at 121.

(57.) Zarsky, The Trouble with Algorithmic Decisions, supra note 42, at 126; Tal Z. Zarsky, Understanding Discrimination in the Scored Society, 89 WASH. L. REV. 1375 (2014).

(58.) Pasquale & Cashwell, supra note 42, at 38.

(59.) Citron, supra note 47, at 1261–62.

(60.) Citron, supra note 47; Zarsky, The Trouble with Algorithmic Decisions, supra note 42 at 123–30.

(61.) What Is Chat Bot?, WEBOPEDIA, http://www.webopedia.com/TERM/C/chat_bot.html.

(62.) Microsoft “Deeply Sorry” for Racist and Sexist Tweets by AI Chatbot, THE GUARDIAN (Mar. 26, 2016), http://www.theguardian.com/technology/2016/mar/26/microsoft-deeply-sorry-for-offensive-tweets-by-ai-chatbot.

(63.) Sarah Jeong, How to Make a Bot that Isn’t Racist, MOTHERBOARD (Mar. 25, 2016), http://motherboard.vice.com/read/how-to-make-a-not-racist-bot.

(64.) “Computerized medical devices can fail in many ways, including through programming errors, incorrect calibration, and exposure to malicious intrusions, as well as physical or medical errors. Over a thousand recalls were issued on software-based medical devices from 1999 to 2005. Hundreds of deaths have been attributed to software failure in medical devices.” Long Comment Regarding a Proposed Exemption Comment of a Coalition of Medical Device Researchers in Support of Proposed Class 27: Software—Networked Medical Devices, at 2, U.S. COPYRIGHT OFF. LIBR. CONGRESS, http://copyright.gov/1201/2015/comments-020615/InitialComments_longform_Coalition_of_Medical_Device_Researchers_Class27.pdf.

(66.) Id. at 130. See also Amy J. Schmitz, Secret Consumer Scores and Segmentations: Separating “Haves” from “Have-Nots,” 2014 MICH. ST. L. REV. 1411, 1469 (2014) (stating, in the consumer rating context, that auditing procedures need to be put in place to supervise use of data and to ensure the legitimacy of automated decision-making systems).

(67.) Citron, supra note 47, at 1309.

(68.) Rich, supra note 47, at 66.

(69.) Citron, supra note 47; Julia Angwin, Make Algorithms Accountable, N.Y. TIMES (Aug. 1, 2016), http://www.nytimes.com/2016/08/01/opinion/make-algorithms-accountable.html?_r=0.

(70.) Citron, supra note 47; Kate Crawford & Jason Schultz, Big Data and Due Process: Toward a Framework to Redress Predictive Privacy Harms, 55 B.C. L. REV. 93 (2014).

(71.) Anjanette H. Raymond & Scott J. Shackelford, Jury Glasses: Wearable Technology and Its Role in Crowdsourcing Justice, 17 CARDOZO J. CONFLICT RESOL. 115, 129 (2015); Zarsky, The Trouble with Algorithmic Decisions, supra note 42, at 120, 122. In many respects, this criticism is reminiscent of the critique that “litigation romanticists” leveled against ADR enthusiasts: they tended to dismiss the problematic aspects of courts’ operation while thoroughly criticizing ADR. See Carrie Menkel-Meadow, Whose Dispute Is It Anyway?, 83 GEO. L.J. 2663, 2669 (1995).

(72.) RHODE, ACCESS TO JUSTICE, supra note 3, at 86. Indeed, the trade-off conception is so strong that it continues to color the discussion on ODR. See Raymond & Shackelford, Technology, Ethics and Access to Justice, supra note 50, at 487. The writing on ODR has also assumed the existence of a trade-off. See Julia Hornle, CROSS-BORDER INTERNET DISPUTE RESOLUTION 17 (2009); Arno R. Lodder & John Zeleznikow, ENHANCED DISPUTE RESOLUTION THROUGH THE USE OF INFORMATION TECHNOLOGY 21 (2010).

(73.) For a definition of fairness that builds on due process and general theories of procedural fairness, see HORNLE, supra note 72. We complement this approach by looking at the disparate effects of procedural arrangements on outcomes.

(74.) Orna Rabinovich-Einy, Deconstructing Dispute Classifications: Avoiding the Shadow of the Law in Dispute System Design in Healthcare, 12 CARDOZO J. DISP. RESOL. 55, 78–80 (2010).

(75.) Id.

(76.) Some readers, fearing over-litigiousness, may question the desirability of expanding the number of complaints that are redressed. In this vein, Lawrence Friedman stated that “we cannot have a system that provides unlimited access to justice; the pyramid must remain a pyramid rather than become a square.” See Friedman, supra note 3. Our view is different: we share Professor Rhode’s view that the focus should be not on over-litigiousness but rather on “inaccessible rights and remedies.” RHODE, ACCESS TO JUSTICE, supra note 3, at 5. Unlike Rhode, though, we look beyond courts and rights to nonlegal institutions and problems.

(77.) Felstiner et al., supra note 16, at 636.

(78.) Joseph Cox & Jason Koebler, Facebook Decides Which Killings We’re Allowed to See, MOTHERBOARD (July 7, 2016), http://motherboard.vice.com/read/philando-castile-facebook-live; Kalev Leetaru, Is the Internet Evolving Away from Freedom of Speech?, FORBES (Jan. 15, 2016), http://www.forbes.com/sites/kalevleetaru/2016/01/15/is-the-internet-evolving-away-from-freedom-of-speech/.

(79.) Adrienne LaFrance, Even the Editor of Facebook’s Mood Study Thought It Was Creepy, THE ATLANTIC (June 28, 2014), http://www.theatlantic.com/technology/archive/2014/06/even-the-editor-of-facebooks-mood-study-thought-it-was-creepy/373649/; Galen Panger, Reassessing the Facebook Experiment: Critical Thinking about the Validity of Big Data Research, INFORMATION, COMMUNICATION & SOCIETY 1 (2015).

(80.) The Muse, The Facebook Experiment: What It Means for You, FORBES (Aug. 4, 2014), http://www.forbes.com/sites/dailymuse/2014/08/04/the-facebook-experiment-what-it-means-for-you/.

(81.) See Jonathan Zittrain, Facebook Could Decide an Election without Anyone Ever Finding Out, NEW REPUBLIC (June 1, 2014), https://newrepublic.com/article/117878/information-fiduciary-solution-facebook-digital-gerrymandering. Zittrain discusses the possibility of digital gerrymandering by Facebook or Twitter and supports treating these platforms as “information fiduciaries” as a potential constraint on their power.

(82.) Id.; Zeynep Tufekci, Engineering the Public: Big Data, Surveillance and Computational Politics, 19 FIRST MONDAY (July 15, 2014), http://firstmonday.org/article/view/4901/4097 (describing the dangers of computational politics in the age of Big Data).

(83.) Peter Lee, Learning from Tay’s Introduction, OFFICIAL MICROSOFT BLOG (Mar. 25, 2016), http://blogs.microsoft.com/blog/2016/03/25/learning-tays-introduction/#sm.0000fbizbacc7e7ay492gmco4gmct.

(84.) Jeong, supra note 63.

(86.) Citron, supra note 47, at 1271–72 (revealing the failures of automated systems and the significance of human involvement in the decision-making process, especially in light of the “automation bias,” which “effectively turns a computer program’s suggested answer into a trusted final decision”); id. at 1303–04 (stating that “[a]utomation is more attractive where the risks associated with human bias outweigh that of automation bias. It is advantageous when an issue does not require the exercise of situation-specific discretion. Decisions best addressed with standards should not be automated”); Raymond & Shackelford, Technology, Ethics and Access to Justice, supra note 50, at 517.

(88.) Amy J. Schmitz & Colin Rule, THE NEW HANDSHAKE: ONLINE DISPUTE RESOLUTION AND THE FUTURE OF CONSUMER PROTECTION 69 (forthcoming, on file with authors). See also id. at 78–79 (stating that “independent evaluators should play a role in ensuring the fairness of these privately created processes”); Orly Lobel, The Law of the Platform, 101 MINN. L. REV. (forthcoming, 2016) (discussing the need to require platforms to disclose data and assist in its analysis).