Panel 3: Metadata and Human Stereotypes in AI Systems

Starts at
Tue, Nov 7, 2023, 14:30 South Korea Time
(07 Nov 2023, 05:30 UTC)
Finishes at
Tue, Nov 7, 2023, 16:00 South Korea Time
(07 Nov 2023, 07:00 UTC)
Venue
Room 201

Presentations

How does metadata contribute to the reinforcement of human stereotypes in AI systems?

"Bias" is a technical concept in machine learning, referring to situations where the training data is not representative of the real world, leading to systematically skewed patterns or models. "Fairness", in contrast, is a social concept with significant implications for users. Following Hannes Hapke et al., fairness concerns identifying when certain groups of people experience problems or outcomes that differ from those of others. Consider credit scoring as an illustration: if an AI model decides who should be granted a loan, it is legitimate for applicants who would not repay to be treated differently, i.e., to be denied credit. A fairness problem arises, however, if the model wrongly denies loans only to individuals of a certain race.

The digitization of cultural heritage (CH) objects began as a preservation effort, but it has since enabled AI technology to extract knowledge from these collections, enhancing user experiences and becoming a valuable resource for GLAM institutions (Galleries, Libraries, Archives, and Museums). For digitized data to be usable by AI, it must be annotated in a way that is meaningful and relevant to future ML tasks; Iconclass is a prominent example of such an annotation scheme. These annotations form the foundation on which AI models are built, so any ideas and concepts embedded in them surface in the final model, potentially reproducing the original biases.

The presentation will shed light on how metadata can unintentionally encode prejudice against the LGBT community, perpetuate gender inequality, or reinforce colonial stereotypes. We will demonstrate how these biases permeate AI systems and underscore the importance of fairness in AI in general and for GLAM in particular.
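To make the loan example concrete, the following is a minimal sketch, with purely hypothetical data and field names (not from the presentation), of one common way to operationalize this fairness notion: comparing, per group, how often applicants who would have repaid are nonetheless denied credit.

```python
# Hypothetical illustration: check whether a credit model's mistakes fall
# disproportionately on one group. "would_repay" is assumed ground truth,
# "approved" is the model's decision; all values below are toy examples.
from collections import defaultdict

records = [
    # (group, would_repay, approved)
    ("A", True, True), ("A", True, True), ("A", True, False), ("A", False, False),
    ("B", True, False), ("B", True, False), ("B", True, True), ("B", False, False),
]

stats = defaultdict(lambda: {"repayers": 0, "wrongly_denied": 0})
for group, would_repay, approved in records:
    if would_repay:
        stats[group]["repayers"] += 1
        if not approved:
            stats[group]["wrongly_denied"] += 1

for group, s in sorted(stats.items()):
    rate = s["wrongly_denied"] / s["repayers"]
    print(f"group {group}: false-denial rate among creditworthy applicants = {rate:.2f}")
```

A large gap between the printed rates would indicate the kind of unfairness described above, even if the model's overall accuracy looks acceptable.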

  • Artem Reshetnikov

    Barcelona Supercomputing Center

    Artem is an accomplished deep learning researcher at the Barcelona Supercomputing Center. With extensive experience in Computer Vision and Natural Language Processing, he skillfully applies these areas of expertise to his work. Throughout his life, Artem has nurtured a profound curiosity for history and art, and he has even completed various online courses in these subjects. For quite some time, he pondered how to unite his two primary passions: machine learning and art. Eventually, he found the perfect solution through his current project, Saint George on a Bike. This innovative endeavor aims to enrich the metadata of paintings using Deep Learning and NLP approaches, effectively bridging the gap between his interests.

    Artem's academic journey culminated in a Master's Degree in Engineering from the Autonomous University of Barcelona in 2019. Prior to this, he contributed his talents to several commercial projects that focused on Data Analysis, Computer Vision, and Anomaly Detection in marketing and retail sectors. Notably, he made significant contributions to companies like Indra and Tecnocom in Spain. These projects centered around harnessing the power of deep learning for tasks such as traffic counting through Computer Vision, analyzing time series data to detect anomalies in client behavior, and strategizing marketing efforts based on valuable insights.