
AI Explainability Reconsidered

16.1.23

Co-sponsored by AI, Ethics & Law Community - TAD center

Hofit Wasserman, a PhD candidate at TAU Law, presented her work reconsidering the concept of explainability as it applies to AI. Discussants: Prof. Lilian Edwards, Newcastle Law School, UK, and Prof. Ran Gilad-Bachrach, Faculty of Engineering, TAU.

As the predictive power of AI gradually replaces human decision makers, calls for a "right to explanation" of automated systems gain growing support. Accordingly, the technological field entrusted with realizing this right continues to sharpen its professional toolkit, often referred to as "explainability" or "interpretability", to generate explanations for end users and decision subjects. This discussion explores a rudimentary question: Does explainability actually benefit end users? Drawing on insights from the legal and machine learning domains, a multidisciplinary analysis uncovers key complexities in the effort to provide explanations to users, and critically questions the intuitive appeal of AI explanations as a means of assisting users in an automated world.

Recording
Documents

Images from the event
