ACT Workshop | Evaluating Outcomes and Impact of Legal Decision Support Online Tools
01/19/2021 • 9 AM – 12 PM (ET)
Online (Zoom)
Online legal decision support tools have emerged as popular means of helping citizens learn about their rights. These tools have the potential to make access to the law more effective and to increase access to justice in our societies by providing information that can be used to seek an amicable resolution to a dispute (leveraging, for example, Online Dispute Resolution systems). Hence, access to justice efforts led by legal institutions today tend to include such tools (SOQUIJ, Educaloi, PLEAC or the Quebec Housing Administrative Tribunal).
The use of these tools is not without impact. It has long been recognized that technology is not neutral, that it perpetuates an ideology and "comes with an agenda for social change". As a result, the choices made in the development and implementation of a legal decision support tool can lead to significant changes in the behaviors of the legal institutions that use it. Further, these choices can have a significant impact on parties to a dispute and on their level of access to, understanding of, and satisfaction with the judicial or extra-judicial process in which they are involved, and even on the outcome of the process itself.
Evaluating and understanding legal decision support tools and their impacts is therefore a crucial element of their development and implementation. Unfortunately, this is by no means easy. First, the goals that should be prioritized in creating legal decision support tools might not be evident. Fairness, efficiency, user experience and accuracy, for example, are all extremely important goals, but they might not always align. For example, whether settling a case or going to court is preferable and leads to a better outcome can be highly debatable and might depend on the particular case and perspective. Second, even once the desirable outcome has been determined, the problem of how to concretely measure and assess the systems with regard to these goals remains complex. Third, decision support systems are often expected to cater to large, heterogeneous user bases with different goals, levels of legal knowledge and resources. Systems that work well for one of these groups might not work for others, making evaluation very difficult.
Therefore, it is important to think about different ways of evaluating decision support systems and their impacts. The ACT project invites participants to this workshop to discuss the methods, opportunities and challenges in the evaluation of decision support systems.
The discussion at the workshop will serve as a basis for determining a common evaluation methodology, with a view to carrying out a series of comparative studies of legal decision support tool projects undertaken by ACT partners, in order to contribute to the emergence of good practices in the field. Questions to be discussed include:
- How can we measure outcomes and impact of legal decision support tools?
- What serves as a desirable outcome in the legal system? Whose perspective (e.g. the legal system, parties, society etc.) should be given priority?
- Which goals should be prioritized in creating and evaluating decision support tools? How can the success of the systems with regard to certain goals be measured?
- Which metrics can serve as useful indicators for the evaluation of a system? Which problems might they conceal?
- What concrete examples of frameworks and methods for evaluating decision support systems exist today? How well do they work in practice?
- How can decision support systems be continuously evaluated and improved during their use?
The workshop will be held on 19 January 2021, from 9 AM to 12 PM (ET), entirely online. Speakers are invited to present for 30 minutes, each followed by a 15-minute discussion with the participants.
| Speaker | Topic | Format |
|---|---|---|
| Erik Bornmann (CLEO) | CLEO is developing an evaluation framework for interactive tools that support people who are going online to complete court forms and other law-related forms. | Case study |
| Alexandra Pasca (McGill) | Methodological challenges and good practices in the development of useful models | Research project |
| Richard Rogers (CRT) | User experience review (literacy aspects) | Case study |
Butler, Stacy, Sarah Mauet, Christopher L. Griffin and Mackenzie Pish, "The Utah Online Dispute Resolution Platform: A Usability Evaluation and Report" (September 8, 2020), available at SSRN: https://ssrn.com/abstract=3696105 or http://dx.doi.org/10.2139/ssrn.3696105
Cabanas, E. and E. Illouz, Manufacturing Happy Citizens: How the Science and Industry of Happiness Control Our Lives, Cambridge, Polity, 2019
Egan, M., An Analysis of Richard H. Thaler and Cass R. Sunstein's Nudge, London, Macat Library, 2017
Hummel, D. and A. Maedche, "How effective is nudging? A quantitative review on the effect sizes and limits of empirical nudging studies", (2019) 80 Journal of Behavioral and Experimental Economics
Sela, A., "e-Nudging Justice: The Role of Digital Choice Architecture in Online Courts", (2019) 2019 Journal of Dispute Resolution 127
Bertenthal, Alyse, "Speaking of Justice: Encounters in a Legal Self-Help Clinic", (2016) 39 PoLAR: Political and Legal Anthropology Review 261-275, doi: 10.1111/plar.12193
This content was last updated on 01/28/2021 at 10:06 AM.