Data, Power and Bias in Artificial Intelligence
Author(s)
Date Issued
2020-07-21
Date Available
2021-09-08T15:06:36Z
Abstract
Artificial Intelligence has the potential to exacerbate societal bias and set back decades of advances in equal rights and civil liberties. Data used to train machine learning algorithms may capture social injustices, inequality or discriminatory attitudes, which may then be learned and perpetuated in society. Attempts to address this issue are rapidly emerging from different perspectives, involving technical solutions, social justice and data governance measures. While each of these approaches is essential to the development of a comprehensive solution, the discourse associated with each often remains disparate. This paper reviews ongoing work on data justice, fairness and bias mitigation in AI systems across these domains, exploring their interrelated dynamics and examining whether the inevitability of bias in AI training data may in fact be used for social good. We highlight the complexity associated with defining policies for dealing with bias, and we also consider the technical challenges in addressing issues of societal bias.
Sponsorship
European Commission - European Regional Development Fund
Science Foundation Ireland
Type of Material
Conference Publication
Language
English
Status of Item
Peer reviewed
Conference Details
AI for Social Good: Harvard CRCS Workshop, Online, 20-21 July 2020
This item is made available under a Creative Commons License
File(s)
Name: DataPowerBias_AI.pdf
Size: 120.48 KB
Format: Adobe PDF
Checksum (MD5): fdf3bb58fe65ea3563cab86592d04dbf