This chapter focuses on a particular normative concern associated with machine decision-making that has attracted considerable attention in policy debate: the problem of bias in algorithmic systems, which gives rise to various forms of 'digital discrimination'. Digital discrimination entails treating individuals unfairly, unethically, or simply differently on the basis of personal data processed automatically by an algorithm. It often reproduces existing patterns of discrimination in the offline world, either by inheriting the biases of prior decision-makers or by reflecting widespread prejudices in society. The chapter highlights various forms and sources of digital discrimination, pointing to a rich and growing body of research that seeks to develop technical responses aimed at correcting for, or otherwise removing, these sources of bias.