Human-Like Machine Intelligence

Edited by Stephen Muggleton and Nicholas Chater

Print publication date: 2021

Print ISBN-13: 9780198862536

Published to Oxford Scholarship Online: August 2021

DOI: 10.1093/oso/9780198862536.001.0001

Interactive Learning with Mutual Explanations in Relational Domains

Chapter: 17 Interactive Learning with Mutual Explanations in Relational Domains
Source: Human-Like Machine Intelligence
Author(s): Ute Schmid
Publisher: Oxford University Press
DOI: 10.1093/oso/9780198862536.003.0017

With the growing number of machine learning applications in complex real-world domains, machine learning research has to meet new requirements: dealing with the imperfections of real-world data, and meeting the legal as well as ethical obligations to make classifier decisions transparent and comprehensible. In this contribution, arguments for interpretable and interactive approaches to machine learning are presented. It is argued that visual explanations are often not expressive enough to capture critical information that relies on relations between different aspects or sub-concepts. Consequently, inductive logic programming (ILP) and the generation of verbal explanations from Prolog rules are advocated. Interactive learning in the context of ILP is illustrated with the Dare2Del system, which helps users manage their digital clutter. It is shown that verbal explanations overcome the explanatory one-way street from AI system to user: interactive learning with mutual explanations allows the learning system to take into account not only corrections of class labels but also corrections of explanations to guide learning. We propose mutual explanations as a building block for human-like computing and an important ingredient for human-AI partnership.
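
To make the approach concrete, the following is a minimal, self-contained sketch (SWI-Prolog) of the kind of relational rule and template-based verbalization the abstract describes. The rule, the predicate names (irrelevant/1, copy_of/2, same_directory/2, older_than/2), and the example facts are illustrative assumptions for a Dare2Del-like setting, not code taken from the chapter or from the Dare2Del system itself.

% Hypothetical learned rule: a file is irrelevant if it is an older copy
% of another file in the same directory. All predicates and facts below
% are illustrative assumptions.
irrelevant(File) :-
    copy_of(File, Other),
    same_directory(File, Other),
    older_than(File, Other).

% Toy background knowledge for a single example case.
copy_of('report_v1.doc', 'report_v2.doc').
same_directory('report_v1.doc', 'report_v2.doc').
older_than('report_v1.doc', 'report_v2.doc').

% One verbalization template per body literal, rendered via format/3.
verbalize(copy_of(F, G), S) :-
    format(atom(S), '~w is a copy of ~w', [F, G]).
verbalize(same_directory(F, G), S) :-
    format(atom(S), '~w is in the same directory as ~w', [F, G]).
verbalize(older_than(F, G), S) :-
    format(atom(S), '~w is older than ~w', [F, G]).

% Explain a positive classification by verbalizing every condition
% that makes the rule body true.
explain(File, Sentences) :-
    copy_of(File, Other),
    same_directory(File, Other),
    older_than(File, Other),
    maplist(verbalize,
            [copy_of(File, Other),
             same_directory(File, Other),
             older_than(File, Other)],
            Sentences).

Querying explain('report_v1.doc', Why) binds Why to three English sentences justifying the classification. In the mutual-explanation loop the abstract describes, a user who rejects one of these sentences is correcting the explanation itself, and the offending condition can then be removed or specialized in the next learning round rather than merely relabeling the example.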

Keywords: Explainable AI, Interpretable Machine Learning, Inductive Logic Programming, Interactive Learning, Human-AI Partnership
